Jan 03 03:14:39 crc systemd[1]: Starting Kubernetes Kubelet... Jan 03 03:14:39 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 03 03:14:39 
crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 
03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc 
restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 
crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 
crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 
03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 03 03:14:39 crc 
restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 
03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 
03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:39 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 03 03:14:40 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
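The restorecon entries above report, for each path under /var/lib/kubelet, the SELinux label it would have been reset to (for example system_u:object_r:container_file_t:s0:c7,c13) and that the file was left alone because its context was customized by the admin. As a minimal sketch, assuming a Linux host with SELinux enabled and read access to the paths, the label that restorecon compares against policy can be read from the security.selinux extended attribute; this script is illustrative and not part of the log.

```python
#!/usr/bin/env python3
"""Illustrative sketch: print the SELinux label of files under a directory,
the same label restorecon reports (e.g. system_u:object_r:container_file_t:s0:c7,c13).
Assumes Linux with SELinux xattrs and read access to the tree."""
import os
import sys


def selinux_label(path: str) -> str:
    # The on-disk label is stored in the "security.selinux" extended attribute,
    # usually NUL-terminated.
    try:
        return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()
    except OSError as exc:
        return f"<error: {exc.strerror}>"


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/var/lib/kubelet"
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            print(f"{selinux_label(full)}  {full}")
```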
Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 03 03:14:40 crc kubenswrapper[4746]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.279437 4746 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282392 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282412 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282418 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282422 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282426 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282430 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282433 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282437 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282440 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282444 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282448 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282451 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282455 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282466 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282471 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282476 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282482 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282488 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282493 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282498 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282505 4746 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282512 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282517 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282523 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282528 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282534 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282538 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282541 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282545 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282550 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282556 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282560 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282564 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282569 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282573 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282577 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282581 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282584 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282588 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282592 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282595 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282600 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282605 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282608 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282612 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282616 4746 
feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282620 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282623 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282627 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282632 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282636 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282640 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282644 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282667 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282671 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282674 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282678 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282681 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282685 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282688 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282692 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282695 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282699 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282703 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282708 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282713 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282717 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282721 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282725 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282729 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.282732 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282929 4746 flags.go:64] FLAG: --address="0.0.0.0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282941 4746 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282949 4746 flags.go:64] FLAG: --anonymous-auth="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282955 4746 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282960 4746 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282964 4746 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282971 4746 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282976 4746 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282980 4746 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282985 4746 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282989 4746 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282994 4746 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.282998 4746 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283002 4746 flags.go:64] FLAG: --cgroup-root="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283008 4746 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283012 4746 flags.go:64] FLAG: --client-ca-file="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283016 4746 flags.go:64] FLAG: --cloud-config="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283021 4746 flags.go:64] FLAG: --cloud-provider="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283025 4746 flags.go:64] FLAG: --cluster-dns="[]" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283030 4746 flags.go:64] FLAG: --cluster-domain="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283034 4746 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283038 4746 
flags.go:64] FLAG: --config-dir="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283042 4746 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283046 4746 flags.go:64] FLAG: --container-log-max-files="5" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283051 4746 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283055 4746 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283059 4746 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283063 4746 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283067 4746 flags.go:64] FLAG: --contention-profiling="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283071 4746 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283075 4746 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283079 4746 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283084 4746 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283089 4746 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283093 4746 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283097 4746 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283101 4746 flags.go:64] FLAG: --enable-load-reader="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283105 4746 flags.go:64] FLAG: --enable-server="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283110 4746 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283115 4746 flags.go:64] FLAG: --event-burst="100" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283119 4746 flags.go:64] FLAG: --event-qps="50" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283123 4746 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283127 4746 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283131 4746 flags.go:64] FLAG: --eviction-hard="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283136 4746 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283139 4746 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283143 4746 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283147 4746 flags.go:64] FLAG: --eviction-soft="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283151 4746 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283155 4746 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283160 4746 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 03 03:14:40 
crc kubenswrapper[4746]: I0103 03:14:40.283164 4746 flags.go:64] FLAG: --experimental-mounter-path="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283168 4746 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283172 4746 flags.go:64] FLAG: --fail-swap-on="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283175 4746 flags.go:64] FLAG: --feature-gates="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283180 4746 flags.go:64] FLAG: --file-check-frequency="20s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283184 4746 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283188 4746 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283194 4746 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283198 4746 flags.go:64] FLAG: --healthz-port="10248" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283203 4746 flags.go:64] FLAG: --help="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283227 4746 flags.go:64] FLAG: --hostname-override="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283231 4746 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283236 4746 flags.go:64] FLAG: --http-check-frequency="20s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283241 4746 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283245 4746 flags.go:64] FLAG: --image-credential-provider-config="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283249 4746 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283254 4746 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283259 4746 flags.go:64] FLAG: --image-service-endpoint="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283264 4746 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283269 4746 flags.go:64] FLAG: --kube-api-burst="100" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283274 4746 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283279 4746 flags.go:64] FLAG: --kube-api-qps="50" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283284 4746 flags.go:64] FLAG: --kube-reserved="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283289 4746 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283293 4746 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283297 4746 flags.go:64] FLAG: --kubelet-cgroups="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283301 4746 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283305 4746 flags.go:64] FLAG: --lock-file="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283309 4746 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283313 4746 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283317 4746 
flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283323 4746 flags.go:64] FLAG: --log-json-split-stream="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283327 4746 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283331 4746 flags.go:64] FLAG: --log-text-split-stream="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283342 4746 flags.go:64] FLAG: --logging-format="text" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283348 4746 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283353 4746 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283357 4746 flags.go:64] FLAG: --manifest-url="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283361 4746 flags.go:64] FLAG: --manifest-url-header="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283367 4746 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283371 4746 flags.go:64] FLAG: --max-open-files="1000000" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283376 4746 flags.go:64] FLAG: --max-pods="110" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283380 4746 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283384 4746 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283389 4746 flags.go:64] FLAG: --memory-manager-policy="None" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283393 4746 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283397 4746 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283401 4746 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283405 4746 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283415 4746 flags.go:64] FLAG: --node-status-max-images="50" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283419 4746 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283423 4746 flags.go:64] FLAG: --oom-score-adj="-999" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283428 4746 flags.go:64] FLAG: --pod-cidr="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283432 4746 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283438 4746 flags.go:64] FLAG: --pod-manifest-path="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283442 4746 flags.go:64] FLAG: --pod-max-pids="-1" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283446 4746 flags.go:64] FLAG: --pods-per-core="0" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283450 4746 flags.go:64] FLAG: --port="10250" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283454 4746 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.283458 4746 flags.go:64] FLAG: --provider-id="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283462 4746 flags.go:64] FLAG: --qos-reserved="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283466 4746 flags.go:64] FLAG: --read-only-port="10255" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283470 4746 flags.go:64] FLAG: --register-node="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283474 4746 flags.go:64] FLAG: --register-schedulable="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283478 4746 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283484 4746 flags.go:64] FLAG: --registry-burst="10" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283488 4746 flags.go:64] FLAG: --registry-qps="5" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283492 4746 flags.go:64] FLAG: --reserved-cpus="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283496 4746 flags.go:64] FLAG: --reserved-memory="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283501 4746 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283505 4746 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283510 4746 flags.go:64] FLAG: --rotate-certificates="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283514 4746 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283518 4746 flags.go:64] FLAG: --runonce="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283522 4746 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283526 4746 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283530 4746 flags.go:64] FLAG: --seccomp-default="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283540 4746 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283545 4746 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283548 4746 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283553 4746 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283557 4746 flags.go:64] FLAG: --storage-driver-password="root" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283561 4746 flags.go:64] FLAG: --storage-driver-secure="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283565 4746 flags.go:64] FLAG: --storage-driver-table="stats" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283569 4746 flags.go:64] FLAG: --storage-driver-user="root" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283573 4746 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283577 4746 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283581 4746 flags.go:64] FLAG: --system-cgroups="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283585 4746 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.283590 4746 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283595 4746 flags.go:64] FLAG: --tls-cert-file="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283598 4746 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283603 4746 flags.go:64] FLAG: --tls-min-version="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283611 4746 flags.go:64] FLAG: --tls-private-key-file="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283617 4746 flags.go:64] FLAG: --topology-manager-policy="none" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283621 4746 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283625 4746 flags.go:64] FLAG: --topology-manager-scope="container" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283630 4746 flags.go:64] FLAG: --v="2" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283636 4746 flags.go:64] FLAG: --version="false" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283642 4746 flags.go:64] FLAG: --vmodule="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283648 4746 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.283668 4746 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283793 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283799 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283803 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283807 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283811 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283814 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283818 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283823 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283827 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283830 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283833 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283837 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283840 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283843 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283848 4746 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283853 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283857 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283861 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283864 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283868 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283872 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283876 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283879 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283885 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283890 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283893 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283897 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283900 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283904 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283907 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283910 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283914 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283917 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283921 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283924 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283928 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283931 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283935 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283938 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283943 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283946 4746 
feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283950 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283954 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283957 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283961 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283964 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283968 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283971 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283974 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283979 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283983 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283986 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283990 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283993 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.283996 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284002 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284006 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284011 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284017 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284023 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284028 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284032 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284038 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284042 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284048 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284052 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284057 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284061 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284065 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284069 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.284074 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.284250 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.294051 4746 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.294116 4746 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294242 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294256 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294266 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294275 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294283 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294291 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294299 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294308 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294315 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 03:14:40 
crc kubenswrapper[4746]: W0103 03:14:40.294327 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294340 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294348 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294357 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294365 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294373 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294382 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294390 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294399 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294408 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294416 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294425 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294433 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294441 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294449 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294457 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294465 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294473 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294482 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294491 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294499 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294507 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294516 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294525 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294533 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 
03:14:40.294540 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294548 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294556 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294563 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294571 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294579 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294586 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294594 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294601 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294609 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294616 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294624 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294631 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294639 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294646 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294675 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294684 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294692 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294700 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294707 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294715 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294722 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294733 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294742 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294750 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294760 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294768 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294778 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294788 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294796 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294804 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294814 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294822 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294832 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294843 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294851 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.294860 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.294874 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295084 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295098 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295109 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295120 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295129 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295138 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295146 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295154 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295162 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295170 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295177 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295185 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295193 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295217 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295225 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295233 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295241 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295249 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295256 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295267 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295276 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295286 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295296 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295305 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295314 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295323 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295331 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295339 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295346 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295354 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295362 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295370 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295379 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295389 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295398 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295408 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295417 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295425 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295433 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295440 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295448 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295457 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295464 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295472 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295480 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295488 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295496 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 
03:14:40.295506 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295515 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295523 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295530 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295539 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295547 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295554 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295562 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295569 4746 feature_gate.go:330] unrecognized feature gate: Example Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295577 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295586 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295593 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295601 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295608 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295616 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295624 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295631 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295639 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295646 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295684 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295693 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295702 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295710 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.295717 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.295730 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false 
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.296022 4746 server.go:940] "Client rotation is on, will bootstrap in background" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.303268 4746 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.303469 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.304314 4746 server.go:997] "Starting client certificate rotation" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.304343 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.304780 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 03:21:25.420544824 +0000 UTC Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.304902 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.310503 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.312936 4746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.313285 4746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.325159 4746 log.go:25] "Validated CRI v1 runtime API" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.338808 4746 log.go:25] "Validated CRI v1 image API" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.341343 4746 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.345244 4746 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-03-03-10-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.345323 4746 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.377349 4746 manager.go:217] Machine: {Timestamp:2026-01-03 03:14:40.37450811 +0000 UTC m=+0.224398475 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e0c9d956-6366-4423-bba4-4b3a38c60b92 BootID:6aefa87f-1f87-4c4a-a02a-a9b058286472 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3e:56:46 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3e:56:46 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fe:66:ef Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:6d:a5:2b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:14:ad:14 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:2d:a9:1b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:5b:41:18:88:a7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e6:75:79:7a:24:b4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: 
DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.377774 4746 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.377965 4746 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.379246 4746 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.379520 4746 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.379622 4746 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.380107 4746 topology_manager.go:138] "Creating topology manager with none policy" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.380126 4746 container_manager_linux.go:303] "Creating device plugin manager" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.380557 4746 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.380728 4746 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.381268 4746 state_mem.go:36] "Initialized new in-memory state store" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.381401 4746 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.382144 4746 kubelet.go:418] "Attempting to sync node with API server" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.382172 4746 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.382207 4746 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.382223 4746 kubelet.go:324] "Adding apiserver pod source" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.382238 4746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.384383 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.384477 4746 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.384430 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.384593 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.385137 4746 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.385782 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.387201 4746 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388257 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388308 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388326 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388341 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388363 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388379 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388394 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388417 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388434 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388449 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388473 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388488 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.388974 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.389742 
4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.389822 4746 server.go:1280] "Started kubelet" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.390241 4746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.390743 4746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.391478 4746 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.392344 4746 server.go:460] "Adding debug handlers to kubelet server" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393197 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393256 4746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393279 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:00:25.214051953 +0000 UTC Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393318 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 343h45m44.820736607s for next certificate rotation Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.393460 4746 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393551 4746 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393560 4746 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.393902 4746 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.394362 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.394331 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.394459 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.394787 4746 factory.go:55] Registering systemd factory Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.394829 4746 factory.go:221] Registration of the systemd container factory successfully Jan 
03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.395273 4746 factory.go:153] Registering CRI-O factory Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.395315 4746 factory.go:221] Registration of the crio container factory successfully Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.395431 4746 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.395476 4746 factory.go:103] Registering Raw factory Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.395510 4746 manager.go:1196] Started watching for new ooms in manager Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.395048 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18871a1a9e136786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 03:14:40.38975271 +0000 UTC m=+0.239643045,LastTimestamp:2026-01-03 03:14:40.38975271 +0000 UTC m=+0.239643045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.396569 4746 manager.go:319] Starting recovery of all containers Jan 03 03:14:40 crc systemd[1]: Started Kubernetes Kubelet. 
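Note: every API call in the entries above fails with "dial tcp 38.102.83.66:6443: connect: connection refused" against https://api-int.crc.testing:6443 (certificate signing request, node/service informers, lease, CSINode, event posting), which at this point in the boot typically just means the kube-apiserver static pod has not come up yet. The short Go sketch below is a hypothetical, standalone reachability probe for that endpoint; it is not part of the kubelet, the host:port and the expected error text are taken from the log above, and everything else (file name, timeouts, the /healthz path) is an assumption for illustration only.

// probe_apiserver.go — minimal diagnostic sketch, assuming the api-int
// endpoint seen in the kubelet log above. Standard library only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443" // taken from the log entries above

	// 1. Raw TCP dial: a failure here matches the
	//    "dial tcp 38.102.83.66:6443: connect: connection refused" errors in the log.
	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		fmt.Printf("tcp dial failed: %v\n", err)
	} else {
		fmt.Println("tcp dial ok")
		conn.Close()
	}

	// 2. HTTPS probe of /healthz. Certificate verification is skipped because this
	//    is only a reachability check, not an authenticated API call.
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + endpoint + "/healthz")
	if err != nil {
		fmt.Printf("healthz probe failed: %v\n", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s %s\n", resp.Status, body)
}

Run while the log above is still being emitted, the probe would be expected to print the same "connect: connection refused" error; once the apiserver static pod is serving, the TCP dial succeeds and /healthz answers (possibly with 401/403 if anonymous access is restricted, which still demonstrates reachability).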
Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.413416 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414167 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414215 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414245 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414272 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414299 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414325 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414355 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414390 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414422 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414451 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414485 4746 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414517 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.414554 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415361 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415431 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415467 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415503 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415533 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415567 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415598 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415630 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415747 4746 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415788 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415822 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415850 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415904 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415937 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.415969 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416000 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416032 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416073 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416102 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416129 4746 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416158 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416252 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416289 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416322 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416354 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416382 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416413 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416441 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416471 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416500 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416528 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416613 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416652 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416725 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416881 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416915 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.416949 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417038 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417082 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417117 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417147 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417178 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417210 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417240 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417270 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417301 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417330 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417362 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417392 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417426 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417452 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417478 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417504 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417531 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417557 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417584 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417610 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417631 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417697 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417722 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417742 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417759 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417778 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417796 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417814 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417836 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417856 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417877 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417901 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417920 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417940 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417961 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417980 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.417999 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418022 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418046 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418077 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418103 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418128 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418149 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418170 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418194 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418216 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418239 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418261 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418285 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418307 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418329 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418350 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418371 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418401 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418423 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418446 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418470 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418492 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418514 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418535 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418563 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418586 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418608 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418632 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418686 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418718 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418746 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418766 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418788 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418809 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418827 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418847 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418867 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418886 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418906 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418925 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418947 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418968 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.418987 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.419010 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.419032 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.419061 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.419085 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420187 4746 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420234 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420262 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420285 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420304 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420332 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420359 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420386 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420413 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420446 4746 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420478 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420508 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420539 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420573 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420604 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420636 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420708 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420737 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420762 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420798 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420824 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420853 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420878 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420908 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420938 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420966 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.420991 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421019 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421046 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421071 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421095 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421121 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421147 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421175 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421202 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421226 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421252 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421279 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421304 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421330 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421356 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421391 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421419 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421446 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421473 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421497 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421524 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421550 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421573 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421597 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421621 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421650 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421725 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421753 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421785 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421810 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421840 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421871 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421900 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421928 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421955 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.421981 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422011 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422037 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422064 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422091 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422115 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422140 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422166 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422192 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422218 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422299 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422327 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422359 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422383 4746 reconstruct.go:97] "Volume reconstruction finished" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.422400 4746 reconciler.go:26] "Reconciler: start to sync state" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.424626 4746 manager.go:324] Recovery completed Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.437062 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.439469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.439526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.439544 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.441618 4746 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.441636 4746 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.441671 4746 state_mem.go:36] "Initialized new in-memory state store" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.456800 4746 policy_none.go:49] "None policy: Start" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.458260 4746 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.458288 4746 state_mem.go:35] "Initializing new in-memory state store" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.461502 4746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.463558 4746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.463594 4746 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.463619 4746 kubelet.go:2335] "Starting kubelet main sync loop" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.463688 4746 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.464841 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.464917 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.493682 4746 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.516087 4746 manager.go:334] "Starting Device Plugin manager" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.516299 4746 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.516320 4746 server.go:79] "Starting device plugin registration server" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.516981 4746 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.517002 4746 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.517322 4746 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.517427 4746 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.517459 4746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.523884 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.564366 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.564520 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.565726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.565791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.565817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.566032 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.566439 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.566548 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567326 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567433 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567455 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.567921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568940 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.568994 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.569016 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.569722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.569740 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.569780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570228 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570335 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570364 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.570899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.571206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.571232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.571242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.571270 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.571311 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.572335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.572359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.572369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.595538 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.617527 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.618952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.619020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.619038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.619066 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.619527 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624741 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624790 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624881 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624912 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624971 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.624995 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625011 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625052 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625088 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625114 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625147 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.625169 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.726648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.726799 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.726836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.726896 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.726992 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727017 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727025 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727122 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727131 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727162 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727209 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727251 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727271 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727314 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.727336 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727306 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727464 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727541 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727608 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727652 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727732 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727802 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.727875 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc 
kubenswrapper[4746]: I0103 03:14:40.727967 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.728009 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.728080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.820040 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.821465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.821534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.821553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.821609 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.822381 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.911367 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.926940 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.933909 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.936187 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e3258fc3d487bbbb138f1a6ef86710514b2f308f32cf6c74624818770bcc7711 WatchSource:0}: Error finding container e3258fc3d487bbbb138f1a6ef86710514b2f308f32cf6c74624818770bcc7711: Status 404 returned error can't find the container with id e3258fc3d487bbbb138f1a6ef86710514b2f308f32cf6c74624818770bcc7711 Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.951004 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.951456 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c20b10942a19701a69a6372c84e6930de41c49422c79c50c20c6c120b1592201 WatchSource:0}: Error finding container c20b10942a19701a69a6372c84e6930de41c49422c79c50c20c6c120b1592201: Status 404 returned error can't find the container with id c20b10942a19701a69a6372c84e6930de41c49422c79c50c20c6c120b1592201 Jan 03 03:14:40 crc kubenswrapper[4746]: I0103 03:14:40.960033 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 03 03:14:40 crc kubenswrapper[4746]: W0103 03:14:40.970364 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-91613aa86c575de17612297ab35026f5bba75aa5d8f5dd24f6a355ad557ef020 WatchSource:0}: Error finding container 91613aa86c575de17612297ab35026f5bba75aa5d8f5dd24f6a355ad557ef020: Status 404 returned error can't find the container with id 91613aa86c575de17612297ab35026f5bba75aa5d8f5dd24f6a355ad557ef020 Jan 03 03:14:40 crc kubenswrapper[4746]: E0103 03:14:40.996915 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms" Jan 03 03:14:41 crc kubenswrapper[4746]: W0103 03:14:41.014378 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b202d2801808d2640c176dca752568a68e84ba6a4f2b7b04535eb1baa2cf7104 WatchSource:0}: Error finding container b202d2801808d2640c176dca752568a68e84ba6a4f2b7b04535eb1baa2cf7104: Status 404 returned error can't find the container with id b202d2801808d2640c176dca752568a68e84ba6a4f2b7b04535eb1baa2cf7104 Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.223540 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: W0103 03:14:41.224111 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:41 crc kubenswrapper[4746]: E0103 03:14:41.224194 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.225476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.225502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.225510 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.225531 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 03:14:41 crc kubenswrapper[4746]: E0103 03:14:41.225904 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.390522 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.470028 4746 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e" exitCode=0 Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.470112 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.470207 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c20b10942a19701a69a6372c84e6930de41c49422c79c50c20c6c120b1592201"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.470336 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.471280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.471314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.471327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.471820 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.471894 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5a58e5881dcb1133bd2d0c940a017be01ec2d90390c7707799bed12e84086c2"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.473952 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa" exitCode=0 Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474039 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3258fc3d487bbbb138f1a6ef86710514b2f308f32cf6c74624818770bcc7711"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474132 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.474838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.475612 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6b6e677b77a82f6ad6da9e80f8c812caea38bf5c95ad75a72051f529b55d3ddd" exitCode=0 Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.475679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6b6e677b77a82f6ad6da9e80f8c812caea38bf5c95ad75a72051f529b55d3ddd"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.475711 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b202d2801808d2640c176dca752568a68e84ba6a4f2b7b04535eb1baa2cf7104"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.475867 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.476233 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.476972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.477002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.477016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.477317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.477348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.477360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.478097 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1" exitCode=0 Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.478126 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.478146 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"91613aa86c575de17612297ab35026f5bba75aa5d8f5dd24f6a355ad557ef020"} Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.478212 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.479130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.479152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:41 crc kubenswrapper[4746]: I0103 03:14:41.479165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:41 crc kubenswrapper[4746]: W0103 03:14:41.509609 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:41 crc kubenswrapper[4746]: E0103 03:14:41.509680 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:41 crc kubenswrapper[4746]: E0103 03:14:41.797487 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s" Jan 03 03:14:41 crc kubenswrapper[4746]: W0103 03:14:41.860465 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:41 crc kubenswrapper[4746]: E0103 03:14:41.860543 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:41 crc kubenswrapper[4746]: W0103 03:14:41.864453 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 03 03:14:41 crc 
kubenswrapper[4746]: E0103 03:14:41.864507 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.026627 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.028640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.028678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.028688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.028709 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.317887 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.481266 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.481694 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.482486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.482512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.482521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.483742 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.483769 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.483781 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.483859 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:42 crc 
kubenswrapper[4746]: I0103 03:14:42.484510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.484545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.484556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.485932 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.485978 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.485985 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.485989 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.487260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.487288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.487299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.489003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.489036 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.489046 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.489055 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490252 4746 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7fd0d612e9b8971c266bdb5f1cbb79e63e81cbf60caa0064be15e662dc64c2a3" exitCode=0 Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490280 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7fd0d612e9b8971c266bdb5f1cbb79e63e81cbf60caa0064be15e662dc64c2a3"} Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490355 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.490928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:42 crc kubenswrapper[4746]: I0103 03:14:42.849958 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.398072 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.406133 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.497558 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23"} Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.497716 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.499027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.499078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.499098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.500492 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5a7150c73bd47a05320dfd3b5526df6d6a2990c80d90fd5b9441969bc4a2e509" exitCode=0 Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.500556 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5a7150c73bd47a05320dfd3b5526df6d6a2990c80d90fd5b9441969bc4a2e509"} Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.500643 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.500768 4746 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.501998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.502037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.502056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.502161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.502203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:43 crc kubenswrapper[4746]: I0103 03:14:43.502228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24f54b5dfd9a52208ea24331b57b4933d79084d651a4bb0e802acb8896336987"} Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509509 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c58f3915b8ba21a2f08cc0e9923e92178dea4792988545ea876da5e3e5e788f"} Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509524 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"54e8c9d5db16a894fac0ee567110601d0a1c892577c765800214e462c077e307"} Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509529 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509560 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509641 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.509571 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.510912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.510959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.510980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.511225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.511268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.511283 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:44 crc kubenswrapper[4746]: I0103 03:14:44.860131 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.519261 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"470ef9fc25b7b74267ded985b5f2714bfa12dfb3acd4762f5722753c2b998592"} Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.519304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21f25c657a9225e17a2078652eb3a65451f0e8dc69ba0f8149f361ad5ecb34c9"} Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.519352 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.519387 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.519415 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.520831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.839474 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.850620 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 03:14:45 crc kubenswrapper[4746]: I0103 03:14:45.850698 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.197689 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.197850 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.197891 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.198962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.199008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.199022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.521733 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.522726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.522753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:46 crc kubenswrapper[4746]: I0103 03:14:46.522763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.055082 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.055268 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.057012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.057134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.057233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.524333 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.525216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.525243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:47 crc kubenswrapper[4746]: I0103 03:14:47.525253 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.472140 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.472364 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.472417 4746 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.474028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.474064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.474076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.492509 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.527749 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.528926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.528968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:48 crc kubenswrapper[4746]: I0103 03:14:48.528981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.023821 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.024054 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.025498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.025544 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.025557 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.548368 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.548541 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.549852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.549901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:49 crc kubenswrapper[4746]: I0103 03:14:49.549920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:50 crc kubenswrapper[4746]: E0103 03:14:50.523995 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 03 03:14:52 crc kubenswrapper[4746]: E0103 03:14:52.029927 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: 
TLS handshake timeout" node="crc" Jan 03 03:14:52 crc kubenswrapper[4746]: E0103 03:14:52.319540 4746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 03 03:14:52 crc kubenswrapper[4746]: I0103 03:14:52.391962 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 03 03:14:52 crc kubenswrapper[4746]: I0103 03:14:52.992438 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 03:14:52 crc kubenswrapper[4746]: I0103 03:14:52.992531 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 03:14:52 crc kubenswrapper[4746]: I0103 03:14:52.998054 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 03 03:14:52 crc kubenswrapper[4746]: I0103 03:14:52.998106 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 03 03:14:53 crc kubenswrapper[4746]: I0103 03:14:53.630283 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:53 crc kubenswrapper[4746]: I0103 03:14:53.631399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:53 crc kubenswrapper[4746]: I0103 03:14:53.631428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:53 crc kubenswrapper[4746]: I0103 03:14:53.631437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:53 crc kubenswrapper[4746]: I0103 03:14:53.631458 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.433218 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.433345 4746 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.851713 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.851806 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.871214 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.871430 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.873005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.873052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.873070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:55 crc kubenswrapper[4746]: I0103 03:14:55.897223 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.204833 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.205022 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.205360 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.205408 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.206086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.206141 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.206152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.210331 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.548382 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.548382 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.548842 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.548960 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.549579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.549687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.549715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.550004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.550046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.550056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.704737 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.724712 4746 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.742549 4746 csr.go:261] certificate signing request csr-gr7jz is approved, waiting to be issued Jan 03 03:14:56 crc kubenswrapper[4746]: I0103 03:14:56.748399 4746 csr.go:257] certificate signing request csr-gr7jz is issued Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.069887 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.070068 4746 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.071298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.071340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.071353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.749478 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-03 03:09:56 +0000 UTC, rotation deadline is 2026-10-18 22:55:52.033746219 +0000 UTC Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.749572 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6931h40m54.284183583s for next certificate rotation Jan 03 03:14:57 crc kubenswrapper[4746]: E0103 03:14:57.992265 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.993205 4746 trace.go:236] Trace[2050632341]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 03:14:44.136) (total time: 13856ms): Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[2050632341]: ---"Objects listed" error: 13856ms (03:14:57.993) Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[2050632341]: [13.856709103s] [13.856709103s] END Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.993244 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.993449 4746 trace.go:236] Trace[403268450]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 03:14:44.857) (total time: 13135ms): Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[403268450]: ---"Objects listed" error: 13135ms (03:14:57.993) Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[403268450]: [13.135998955s] [13.135998955s] END Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.993698 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.994140 4746 trace.go:236] Trace[940551452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 03:14:44.227) (total time: 13766ms): Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[940551452]: ---"Objects listed" error: 13766ms (03:14:57.994) Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[940551452]: [13.766377178s] [13.766377178s] END Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.994157 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.995220 4746 trace.go:236] Trace[1226317071]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Jan-2026 03:14:43.858) (total time: 14136ms): Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[1226317071]: ---"Objects listed" error: 14136ms (03:14:57.995) Jan 03 03:14:57 crc kubenswrapper[4746]: Trace[1226317071]: [14.136416472s] [14.136416472s] END Jan 03 03:14:57 crc 
kubenswrapper[4746]: I0103 03:14:57.995236 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 03:14:57 crc kubenswrapper[4746]: I0103 03:14:57.995815 4746 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.393250 4746 apiserver.go:52] "Watching apiserver" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.396351 4746 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.396693 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hm664","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397043 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397092 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397188 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397275 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.397352 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.397638 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397706 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397866 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.397902 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.397938 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401022 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401276 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401377 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401599 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401864 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.401992 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.402070 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.402165 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.402252 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.402690 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.402784 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.403026 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.421727 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.435553 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.447795 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.458005 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.469968 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.478639 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.487781 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.496363 4746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.503259 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.528871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529181 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529265 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529336 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529412 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529537 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529638 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529781 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529571 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.529949 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530132 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530398 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530540 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530957 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531091 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531205 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531313 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531434 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531606 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531708 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531790 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531856 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531937 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532077 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532142 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532209 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533043 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533756 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533837 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533907 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534422 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534571 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.535082 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.535598 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536150 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536176 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536193 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536211 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536228 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536246 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536265 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536291 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536312 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536331 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536349 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536384 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536401 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536439 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536457 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536474 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536492 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536509 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536526 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536545 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 03 03:14:58 crc kubenswrapper[4746]: 
I0103 03:14:58.536564 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536583 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536599 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536617 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536635 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536674 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536694 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536712 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536732 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536750 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 03:14:58 crc kubenswrapper[4746]: 
I0103 03:14:58.536768 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536788 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536815 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536836 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536854 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536872 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536893 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536913 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536950 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536969 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536989 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537005 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537022 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537046 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537061 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537083 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537098 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537130 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537145 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537161 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530149 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537178 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537195 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537226 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537247 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537265 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537281 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537297 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537315 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537332 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537350 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537366 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537382 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537399 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537464 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537480 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537497 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537513 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537530 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537546 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537582 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537598 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537614 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537682 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537702 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537718 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537735 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537752 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537769 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537789 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537809 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537826 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537842 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537904 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537920 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537936 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537952 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537977 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537995 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538013 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538049 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538065 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538080 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538112 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538133 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538149 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538166 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538191 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538207 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538223 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538242 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538260 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538282 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538304 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538321 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538355 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538392 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538411 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538426 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538445 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538464 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538480 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538498 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538515 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538532 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538548 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538565 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538582 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538599 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538615 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538632 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541484 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541603 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542162 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542223 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542267 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542317 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542613 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542693 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542739 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542786 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542825 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542872 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542914 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542958 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542998 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543035 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543075 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543113 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543149 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543186 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543229 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543272 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543417 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543473 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543514 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543552 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543594 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543635 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543752 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543817 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fllc6\" (UniqueName: \"kubernetes.io/projected/1722955c-53eb-4bf4-91dc-d3478c190baa-kube-api-access-fllc6\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543910 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544017 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544060 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544117 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1722955c-53eb-4bf4-91dc-d3478c190baa-hosts-file\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544160 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544247 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544288 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544375 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544465 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544570 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544605 4746 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544635 4746 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530241 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530438 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530703 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530731 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530773 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.530889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531005 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531046 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531131 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531182 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549025 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531455 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531524 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531610 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531797 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.531896 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532027 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532107 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532291 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532292 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532554 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532734 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.532987 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533134 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.533601 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534186 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534364 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534379 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.534808 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.535030 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.535554 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.536931 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537160 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537391 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537560 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537795 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549414 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.537997 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538170 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538352 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.538765 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539179 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539265 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539295 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539407 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539738 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539712 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539784 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.539817 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540043 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540340 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540357 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540524 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540646 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.540841 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541019 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541321 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541387 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541535 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.541962 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542023 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.542905 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543170 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543296 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543340 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543601 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.543976 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544037 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544322 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544379 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.544578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.545229 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.545438 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.545569 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.545624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.545894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546033 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546152 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546414 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546736 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.546911 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.546997 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:14:59.046966469 +0000 UTC m=+18.896856974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.547481 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.547718 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.547720 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.547934 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.548009 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.548392 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.548495 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.548816 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549540 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549604 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549772 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.549968 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.550011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.550407 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.550517 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.552084 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.553402 4746 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.553825 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.554042 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.554366 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:14:59.054337334 +0000 UTC m=+18.904227639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.554489 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.550829 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.555007 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.555152 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.555411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.555424 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.556908 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.557364 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.557388 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.557897 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.557916 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.557927 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.558293 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.558400 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.558457 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.558545 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.558867 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.559080 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.559223 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.559488 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.559573 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.559631 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560084 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560087 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560545 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560517 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.560635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.561287 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.561431 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:14:59.061412151 +0000 UTC m=+18.911302696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.561624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.561858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.561920 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.562079 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.562578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.562898 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.562942 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.563038 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.553826 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.563206 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.563429 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.563802 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.564133 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.564325 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.564500 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.564567 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.562321 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.566687 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.564813 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.565045 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.567204 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.565861 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.567980 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.568040 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.568106 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.568129 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.568302 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:14:59.068223543 +0000 UTC m=+18.918114068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.568790 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.569033 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.569802 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.569923 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.570047 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.570591 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.570755 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.570921 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.571177 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.571295 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.571527 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.572131 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23" exitCode=255 Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.572181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.572201 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23"} Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.572762 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.576415 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.576453 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.576472 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.577220 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-03 03:14:59.077153294 +0000 UTC m=+18.927043809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.577282 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.577696 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.577770 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.578176 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.578336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.578411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.578471 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.578728 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.579313 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.583254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.585277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.585439 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.586041 4746 scope.go:117] "RemoveContainer" containerID="d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.586612 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.586871 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.586863 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.587363 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.588075 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.588378 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.588467 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.588476 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.588910 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.590226 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.590278 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.590418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.590338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.591166 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.591244 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.596817 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.596996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.600743 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.601435 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.606290 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.606997 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.609314 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.618756 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.629539 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.630817 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.631225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.632324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.640839 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.642599 4746 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.642695 4746 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.644727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.644773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.644784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.644799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.644811 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646792 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fllc6\" (UniqueName: \"kubernetes.io/projected/1722955c-53eb-4bf4-91dc-d3478c190baa-kube-api-access-fllc6\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646821 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646860 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1722955c-53eb-4bf4-91dc-d3478c190baa-hosts-file\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646881 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646927 4746 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646936 4746 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646945 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646954 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646963 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646972 4746 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646981 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.646991 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647000 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647008 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647017 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647025 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647035 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647043 4746 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647052 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647060 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647069 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647076 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647085 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647094 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647104 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647113 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647121 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647129 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647137 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647145 4746 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647153 4746 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647161 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647169 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647179 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647187 4746 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647196 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647207 4746 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647216 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647224 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647232 4746 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647242 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647250 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647258 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647268 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647276 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647285 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647294 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647302 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647310 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647317 4746 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647326 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647334 4746 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647341 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647350 4746 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647359 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647368 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647375 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647384 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647392 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647400 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647408 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647416 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647423 4746 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647431 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647439 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647449 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647457 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647466 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647566 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647576 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647584 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647592 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647600 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647614 4746 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647623 4746 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647631 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647639 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647647 4746 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647669 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647677 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647685 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647693 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647702 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647712 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647720 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647727 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647735 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647742 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647750 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647757 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647766 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647773 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647782 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647790 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647797 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647805 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647812 4746 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647820 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647828 4746 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647835 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647843 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 
03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647851 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647859 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647867 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647876 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647886 4746 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647894 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647902 4746 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647910 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647919 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647928 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647936 4746 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647944 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647952 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc 
kubenswrapper[4746]: I0103 03:14:58.647960 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647968 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647978 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647987 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.647994 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648002 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648010 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648019 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648027 4746 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648035 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648042 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648055 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648068 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" 
Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648077 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648085 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648094 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648167 4746 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648177 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648187 4746 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648196 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648206 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648213 4746 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648221 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648230 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648238 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648247 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648255 4746 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648264 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648272 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648279 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648287 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648296 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648304 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648312 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648320 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648328 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648336 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648345 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648353 4746 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648361 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648369 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648377 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648384 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648392 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648401 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648409 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648417 4746 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648425 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648432 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648439 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648450 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648457 4746 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648465 4746 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648473 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648480 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648491 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648499 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648507 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648514 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648522 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648529 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648538 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648551 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648559 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648566 4746 reconciler_common.go:293] "Volume detached 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648574 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648581 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648588 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648595 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648603 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648611 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648618 4746 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648625 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648632 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648639 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648665 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648675 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648683 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648691 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648699 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648707 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.648754 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.649519 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.649555 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1722955c-53eb-4bf4-91dc-d3478c190baa-hosts-file\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.660261 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.660699 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.680447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.680494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.680739 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.680757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.680771 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.682174 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fllc6\" (UniqueName: \"kubernetes.io/projected/1722955c-53eb-4bf4-91dc-d3478c190baa-kube-api-access-fllc6\") pod \"node-resolver-hm664\" (UID: \"1722955c-53eb-4bf4-91dc-d3478c190baa\") " pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.692235 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.695423 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.699547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.699577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.699587 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.699603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.699613 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.708816 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.721335 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.724901 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hm664" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.728021 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.728729 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.731013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.731036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.731050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.731073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.731088 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.773878 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.782830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.782867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.782881 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.782903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.782919 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: W0103 03:14:58.796771 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-48199cb716a8b59e44fefc906f08082ec28e4ea5f6689047be0682eeb6c79064 WatchSource:0}: Error finding container 48199cb716a8b59e44fefc906f08082ec28e4ea5f6689047be0682eeb6c79064: Status 404 returned error can't find the container with id 48199cb716a8b59e44fefc906f08082ec28e4ea5f6689047be0682eeb6c79064 Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.797153 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 03 03:14:58 crc kubenswrapper[4746]: E0103 03:14:58.797321 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.798774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.798797 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.798806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.798821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.798832 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:58 crc kubenswrapper[4746]: W0103 03:14:58.801182 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-82edc379af58e947221259bf4cb1c9b477168ac270ddcfd28ba4ebde9dad9365 WatchSource:0}: Error finding container 82edc379af58e947221259bf4cb1c9b477168ac270ddcfd28ba4ebde9dad9365: Status 404 returned error can't find the container with id 82edc379af58e947221259bf4cb1c9b477168ac270ddcfd28ba4ebde9dad9365 Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.901732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.901787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.901799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.901822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:58 crc kubenswrapper[4746]: I0103 03:14:58.901834 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:58Z","lastTransitionTime":"2026-01-03T03:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.003763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.003797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.003811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.003828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.003839 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.056091 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.056231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.056365 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.056374 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:15:00.056334632 +0000 UTC m=+19.906224957 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.056439 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:00.056420514 +0000 UTC m=+19.906310989 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.106807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.106890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.106930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.106955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.106971 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.157135 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.157188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.157207 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157330 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157348 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157322 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 
03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157412 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157448 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157460 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157384 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157543 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:00.157484138 +0000 UTC m=+20.007374523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157571 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:00.157560169 +0000 UTC m=+20.007450614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:59 crc kubenswrapper[4746]: E0103 03:14:59.157611 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:00.15758127 +0000 UTC m=+20.007471575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.210089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.210172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.210183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.210198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.210207 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.312481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.312532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.312563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.312580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.312590 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.415461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.415500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.415509 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.415524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.415533 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.517872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.517920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.517940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.517958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.517974 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.576601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hm664" event={"ID":"1722955c-53eb-4bf4-91dc-d3478c190baa","Type":"ContainerStarted","Data":"8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.576671 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hm664" event={"ID":"1722955c-53eb-4bf4-91dc-d3478c190baa","Type":"ContainerStarted","Data":"52ce22781dcf765416df4ed6b72a446c7e2ff97a3ed6eaf39f8501e833c5b4ba"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.578200 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.578293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0fc29cc94a992347cc5a8907280fe2d8c38b33dcbe04d19e0f947ff4e5d0b7da"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.579733 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.579792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.579806 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"48199cb716a8b59e44fefc906f08082ec28e4ea5f6689047be0682eeb6c79064"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.586679 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.588574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.589454 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.592990 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"82edc379af58e947221259bf4cb1c9b477168ac270ddcfd28ba4ebde9dad9365"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.596099 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.610296 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.620356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.620397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.620410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.620427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.620440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.623874 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.635479 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.657601 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.673417 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.701165 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.722671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.722704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.722716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.722729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.722754 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.745069 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.776282 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.792181 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.824925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.824913 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.825136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.825163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.825183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.825465 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.842924 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.854943 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8lt5d"] Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.855396 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.859231 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.859298 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.859412 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.859447 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.872171 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.886812 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.911051 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.923086 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.927709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.927906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.927992 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.928082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.928163 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:14:59Z","lastTransitionTime":"2026-01-03T03:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.940005 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.951416 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.963699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00b3b853-9953-4039-964d-841a01708848-proxy-tls\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.963937 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfmv\" (UniqueName: \"kubernetes.io/projected/00b3b853-9953-4039-964d-841a01708848-kube-api-access-7bfmv\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.964110 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00b3b853-9953-4039-964d-841a01708848-rootfs\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.964260 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00b3b853-9953-4039-964d-841a01708848-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.966340 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.977633 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:14:59 crc kubenswrapper[4746]: I0103 03:14:59.989426 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.001350 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:14:59Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.015752 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.029955 4746 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.031069 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.031118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.031132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.031152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.031166 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.046707 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.064977 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00b3b853-9953-4039-964d-841a01708848-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065129 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00b3b853-9953-4039-964d-841a01708848-proxy-tls\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065148 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00b3b853-9953-4039-964d-841a01708848-rootfs\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065164 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bfmv\" (UniqueName: \"kubernetes.io/projected/00b3b853-9953-4039-964d-841a01708848-kube-api-access-7bfmv\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.065485 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:15:02.06545983 +0000 UTC m=+21.915350135 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.065523 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00b3b853-9953-4039-964d-841a01708848-rootfs\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.065527 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.065677 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:02.065630434 +0000 UTC m=+21.915520819 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.066453 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00b3b853-9953-4039-964d-841a01708848-mcd-auth-proxy-config\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.068391 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.070968 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00b3b853-9953-4039-964d-841a01708848-proxy-tls\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.086488 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bfmv\" (UniqueName: \"kubernetes.io/projected/00b3b853-9953-4039-964d-841a01708848-kube-api-access-7bfmv\") pod \"machine-config-daemon-8lt5d\" (UID: \"00b3b853-9953-4039-964d-841a01708848\") " pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.133784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.133828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.133836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.133852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.133862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.166347 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.166608 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.166540 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.166683 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:02.166668407 +0000 UTC m=+22.016558702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.166822 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.166889 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167052 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167058 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167111 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167126 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167076 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167184 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:02.167164929 +0000 UTC m=+22.017055284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167194 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.167252 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:02.167233991 +0000 UTC m=+22.017124356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.184518 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00b3b853_9953_4039_964d_841a01708848.slice/crio-0b46ce47b9ba2fb77f24c0526d8f23c76cb149c475610660316aa9d68c218206 WatchSource:0}: Error finding container 0b46ce47b9ba2fb77f24c0526d8f23c76cb149c475610660316aa9d68c218206: Status 404 returned error can't find the container with id 0b46ce47b9ba2fb77f24c0526d8f23c76cb149c475610660316aa9d68c218206 Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.236525 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.236580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.236594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.236610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.236620 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.267805 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gnct7"] Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.268305 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rzrbx"] Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.268960 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.268981 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.270532 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-plg55"] Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.271251 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.272129 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.272357 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.272648 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.272911 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273165 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273428 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273494 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273553 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273682 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273803 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273890 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273969 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.273987 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.274432 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.293859 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.305089 4746 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305698 4746 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305738 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305765 4746 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305786 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305810 4746 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.305977 4746 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.306265 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.306297 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.306727 4746 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.306758 4746 reflector.go:484] 
object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.306915 4746 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.307071 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.307164 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-4ln5h/status\": read tcp 38.102.83.66:44368->38.102.83.66:6443: use of closed network connection" Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.307511 4746 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.307589 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.66:44368->38.102.83.66:6443: use of closed network connection" event="&Event{ObjectMeta:{machine-config-daemon-8lt5d.18871a1f4099d0de openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-8lt5d,UID:00b3b853-9953-4039-964d-841a01708848,APIVersion:v1,ResourceVersion:26682,FieldPath:spec.containers{kube-rbac-proxy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 03:15:00.296339678 +0000 UTC m=+20.146229983,LastTimestamp:2026-01-03 03:15:00.296339678 +0000 UTC m=+20.146229983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.307817 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.307880 4746 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.307901 4746 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than 
a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.308008 4746 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.308031 4746 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.308094 4746 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.329172 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.339402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.339453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.339466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.339487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.339500 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.341124 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.359644 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.368958 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369005 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369068 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369098 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369113 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-conf-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369138 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369165 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369189 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369208 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-os-release\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369253 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-hostroot\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369288 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-multus-certs\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-cnibin\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369339 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbjr\" (UniqueName: \"kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-daemon-config\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369417 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369451 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-os-release\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-netns\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369615 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369721 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369745 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87f6x\" (UniqueName: \"kubernetes.io/projected/784eb651-1784-4e2a-b0ca-34163f44525c-kube-api-access-87f6x\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369795 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-multus\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369815 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-k8s-cni-cncf-io\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369883 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595s4\" (UniqueName: \"kubernetes.io/projected/7938adea-5f3a-4bfa-8776-f8b06ce7219e-kube-api-access-595s4\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369924 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369943 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-socket-dir-parent\") pod \"multus-plg55\" (UID: 
\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.369958 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-system-cni-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370009 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370026 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370081 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370116 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cnibin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370187 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cni-binary-copy\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370234 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370253 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370268 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-bin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370320 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-kubelet\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370335 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370382 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-system-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.370398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-etc-kubernetes\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.376267 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.391965 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.409899 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.423751 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.434643 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.441743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.441773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.441784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.441798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.441810 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.447863 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.458418 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.464825 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.465058 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.465182 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.465284 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.465392 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:00 crc kubenswrapper[4746]: E0103 03:15:00.465486 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.469856 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471056 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-bin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-kubelet\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471147 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-etc-kubernetes\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471168 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471185 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-system-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471208 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471223 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471254 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471271 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-conf-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471292 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471309 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-os-release\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471355 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-cnibin\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbjr\" (UniqueName: \"kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-hostroot\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471413 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-multus-certs\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471398 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471483 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471522 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-bin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-kubelet\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 
03:15:00.471604 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-etc-kubernetes\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471631 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471699 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-system-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471733 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471765 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471449 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471800 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-daemon-config\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471821 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471831 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-os-release\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-netns\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471879 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471896 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471911 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471938 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-conf-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471942 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471971 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-multus\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.471990 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87f6x\" (UniqueName: \"kubernetes.io/projected/784eb651-1784-4e2a-b0ca-34163f44525c-kube-api-access-87f6x\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " 
pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-k8s-cni-cncf-io\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472035 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595s4\" (UniqueName: \"kubernetes.io/projected/7938adea-5f3a-4bfa-8776-f8b06ce7219e-kube-api-access-595s4\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472058 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472211 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-os-release\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472307 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-cni-dir\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-netns\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472365 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472365 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-var-lib-cni-multus\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 
03:15:00.472412 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472447 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472480 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-multus-certs\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-system-cni-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472689 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-system-cni-dir\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.472715 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/784eb651-1784-4e2a-b0ca-34163f44525c-cnibin\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473024 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-hostroot\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473078 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473112 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-host-run-k8s-cni-cncf-io\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473130 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473638 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-os-release\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473747 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473811 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473880 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-socket-dir-parent\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473903 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473923 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473966 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-daemon-config\") pod 
\"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.473968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cnibin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474008 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cnibin\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cni-binary-copy\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474256 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7938adea-5f3a-4bfa-8776-f8b06ce7219e-multus-socket-dir-parent\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474282 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474396 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474448 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474598 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.474787 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7938adea-5f3a-4bfa-8776-f8b06ce7219e-cni-binary-copy\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.475042 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/784eb651-1784-4e2a-b0ca-34163f44525c-cni-binary-copy\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.475031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.476003 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.477372 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.478232 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.480295 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.481282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.481859 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.482924 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.483584 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.484534 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.485115 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.486267 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.486801 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.487299 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.488205 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.488741 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.489923 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.490556 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.491174 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 
03:15:00.491879 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.491885 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595s4\" (UniqueName: \"kubernetes.io/projected/7938adea-5f3a-4bfa-8776-f8b06ce7219e-kube-api-access-595s4\") pod \"multus-plg55\" (UID: \"7938adea-5f3a-4bfa-8776-f8b06ce7219e\") " pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.493139 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbjr\" (UniqueName: \"kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr\") pod \"ovnkube-node-rzrbx\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.493355 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87f6x\" (UniqueName: \"kubernetes.io/projected/784eb651-1784-4e2a-b0ca-34163f44525c-kube-api-access-87f6x\") pod \"multus-additional-cni-plugins-gnct7\" (UID: \"784eb651-1784-4e2a-b0ca-34163f44525c\") " pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.493402 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.494422 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.494920 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.495341 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.496171 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.496596 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.497721 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.498390 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.499341 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.499918 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.500844 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.501314 4746 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.501415 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.503106 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.504022 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.504428 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.506182 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.507173 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.508182 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.508929 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.510247 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.511398 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.512632 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.513689 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.515065 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.515091 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.516043 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.516762 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.518428 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.519261 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.520140 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.520639 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.521142 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.522234 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.522871 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.529007 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.538431 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.543230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.543253 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.543262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.543276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.543286 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.549591 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.561498 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.578686 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.583339 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.597893 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.598021 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gnct7" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.601648 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.601708 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.601722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"0b46ce47b9ba2fb77f24c0526d8f23c76cb149c475610660316aa9d68c218206"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.602425 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-plg55" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.612703 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.627312 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.639682 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.645424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.645457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.645467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.645481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.645490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.649947 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.663030 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.704440 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9a29410_e9d4_4c5a_98cb_e2c56b9170ff.slice/crio-a0f3d7f19faa8de934b20d119d57c5986fa5282c05e4a7cb5ba7d1b2a599e960 WatchSource:0}: Error finding container a0f3d7f19faa8de934b20d119d57c5986fa5282c05e4a7cb5ba7d1b2a599e960: Status 404 returned error can't find the container with id a0f3d7f19faa8de934b20d119d57c5986fa5282c05e4a7cb5ba7d1b2a599e960 Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.711459 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: W0103 03:15:00.716167 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod784eb651_1784_4e2a_b0ca_34163f44525c.slice/crio-7fdb039cde8605dfef5b075bf4e2d522f8ddb15aa8aa777f69206473a6880bb9 WatchSource:0}: Error finding container 7fdb039cde8605dfef5b075bf4e2d522f8ddb15aa8aa777f69206473a6880bb9: Status 404 returned error can't find the container with id 7fdb039cde8605dfef5b075bf4e2d522f8ddb15aa8aa777f69206473a6880bb9 Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.749899 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.758507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.758947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.758958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.758974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.758983 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.773909 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.792110 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.805390 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.822718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.835832 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.849627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.862376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.862426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.862437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.862459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.862472 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.868504 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.885430 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.899543 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.917189 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.934380 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.952153 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.965141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.965199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.965209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.965224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.965235 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:00Z","lastTransitionTime":"2026-01-03T03:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.968340 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.983164 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:00 crc kubenswrapper[4746]: I0103 03:15:00.997208 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.008845 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.020979 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.035559 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.048400 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.067448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.067490 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.067499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.067514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.067526 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.150010 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.170614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.170679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.170692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.170712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.170726 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.207379 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.223604 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.273120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.273163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.273174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.273192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.273203 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.288562 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.299426 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.352961 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.360399 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.375343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.375381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.375390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.375404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.375414 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.411816 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.481075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.481456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.481465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.481480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.481491 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.489263 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.510733 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.546464 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.584076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.584113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.584124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.584140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.584151 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.593115 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.603887 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.612367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.613758 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerStarted","Data":"7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.613801 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerStarted","Data":"419f610d64dda372db75eeac532f4c5e8241572ff0bd2f179aa92029900ec03a"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.615358 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25" exitCode=0 Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.615422 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.615447 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerStarted","Data":"7fdb039cde8605dfef5b075bf4e2d522f8ddb15aa8aa777f69206473a6880bb9"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.616575 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" exitCode=0 Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.616646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.616690 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"a0f3d7f19faa8de934b20d119d57c5986fa5282c05e4a7cb5ba7d1b2a599e960"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.628491 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.644350 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.654008 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.662159 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.665304 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.675173 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.690148 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.690192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.690203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.690219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.690231 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.692911 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.694717 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.707141 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.719759 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.734517 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.751261 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.765279 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.812602 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.815072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.815105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 
03:15:01.815116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.815134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.815146 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.817631 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.830838 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.840214 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.853985 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.868981 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.870936 4746 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.888567 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.898701 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.919134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.919161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.919168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.919180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.919190 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:01Z","lastTransitionTime":"2026-01-03T03:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.948766 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:01 crc kubenswrapper[4746]: I0103 03:15:01.988148 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.020795 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.020832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.020844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.020860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.020870 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.026840 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.069142 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.115542 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.115674 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.115848 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.115903 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:06.115888278 +0000 UTC m=+25.965778583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.115992 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:15:06.11597589 +0000 UTC m=+25.965866195 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.122794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.122819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.122827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.122840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.122850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.137416 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.176171 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host
-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168
.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.193775 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc 
kubenswrapper[4746]: I0103 03:15:02.216622 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.216701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.216746 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216820 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216849 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216862 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216876 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216919 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:06.21690235 +0000 UTC m=+26.066792655 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216929 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216954 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:06.216937631 +0000 UTC m=+26.066827936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216953 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.216987 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.217069 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:06.217051714 +0000 UTC m=+26.066942089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.219177 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tzqwd"] Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.219579 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.225169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.225216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.225227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.225246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.225259 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.229034 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.238898 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.258898 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.279571 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.298421 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.317377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d74e64-7231-46aa-9cef-cb0212ef6396-host\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.317431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/91d74e64-7231-46aa-9cef-cb0212ef6396-serviceca\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.317462 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6b5h\" (UniqueName: \"kubernetes.io/projected/91d74e64-7231-46aa-9cef-cb0212ef6396-kube-api-access-l6b5h\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.327281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.327306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.327314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.327327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.327338 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.346721 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.388060 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.418252 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d74e64-7231-46aa-9cef-cb0212ef6396-host\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.418300 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/91d74e64-7231-46aa-9cef-cb0212ef6396-serviceca\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.418337 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6b5h\" (UniqueName: \"kubernetes.io/projected/91d74e64-7231-46aa-9cef-cb0212ef6396-kube-api-access-l6b5h\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.418402 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/91d74e64-7231-46aa-9cef-cb0212ef6396-host\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.419231 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/91d74e64-7231-46aa-9cef-cb0212ef6396-serviceca\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " 
pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.429560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.429599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.429608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.429628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.429638 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.432538 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z 
is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.458371 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6b5h\" (UniqueName: \"kubernetes.io/projected/91d74e64-7231-46aa-9cef-cb0212ef6396-kube-api-access-l6b5h\") pod \"node-ca-tzqwd\" (UID: \"91d74e64-7231-46aa-9cef-cb0212ef6396\") " pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.463995 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.464040 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.464115 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.464211 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.464243 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:02 crc kubenswrapper[4746]: E0103 03:15:02.464339 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.488594 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\
\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.532373 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.532412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.532423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.532438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.532448 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.537894 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-tzqwd" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.539211 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: W0103 03:15:02.554950 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91d74e64_7231_46aa_9cef_cb0212ef6396.slice/crio-73215deee9f33965329a677b81b308babe90171a9123de0cd83c85948bca43d2 WatchSource:0}: Error finding container 73215deee9f33965329a677b81b308babe90171a9123de0cd83c85948bca43d2: Status 404 returned error can't find the container with id 73215deee9f33965329a677b81b308babe90171a9123de0cd83c85948bca43d2 Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.568026 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.607116 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.622244 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3" exitCode=0 Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.622303 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.635989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.636024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.636034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.636049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.636057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641170 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641186 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641197 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641207 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.641219 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.643206 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tzqwd" event={"ID":"91d74e64-7231-46aa-9cef-cb0212ef6396","Type":"ContainerStarted","Data":"73215deee9f33965329a677b81b308babe90171a9123de0cd83c85948bca43d2"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.650990 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.686440 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.727378 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.740029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.740070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.740082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.740098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.740144 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.771469 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.806410 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.842565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.842604 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.842615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.842631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.842640 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.846362 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.853702 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.857176 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.884043 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.906808 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.944368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.944408 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.944416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.944432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.944440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:02Z","lastTransitionTime":"2026-01-03T03:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.948707 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:02 crc kubenswrapper[4746]: I0103 03:15:02.985617 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:02Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.028486 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.046908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.046963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.046977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.046994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.047003 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.068157 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.114485 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z 
is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.149510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.149549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.149560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.149573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.149586 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.153264 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.186647 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.226107 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.252631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.252700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.252715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.252733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc 
kubenswrapper[4746]: I0103 03:15:03.252744 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.266588 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.306716 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.346004 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.354347 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.354520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.354594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.354682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.354759 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.388790 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.428600 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.457232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.457268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.457285 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.457300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.457311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.468123 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.506199 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.550048 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.559600 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.559817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.559884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.559949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.560008 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.590205 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.631749 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.649103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tzqwd" event={"ID":"91d74e64-7231-46aa-9cef-cb0212ef6396","Type":"ContainerStarted","Data":"988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.653374 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae" exitCode=0 Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.653482 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.662968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.663022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.663039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.663061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.663078 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.675107 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.710965 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.751441 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.765646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.765724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.765739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.765759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.765773 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.795415 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.844567 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z 
is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.869884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.869925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.869937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.869955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.869966 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.870118 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.907194 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.944776 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.972508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.972549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.972565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.972586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:03 crc 
kubenswrapper[4746]: I0103 03:15:03.972605 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:03Z","lastTransitionTime":"2026-01-03T03:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:03 crc kubenswrapper[4746]: I0103 03:15:03.985626 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:03Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.035331 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z 
is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.069460 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.075054 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.075082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.075092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.075105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.075114 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.116037 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.149462 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.177681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.177713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.177722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.177734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.177743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.188762 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.226877 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.273727 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.280092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.280148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.280165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.280194 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.280212 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.308283 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.353756 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.383060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.383108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.383120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.383138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.383150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.388837 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.427848 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.464812 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.464878 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.464924 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:04 crc kubenswrapper[4746]: E0103 03:15:04.465043 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:04 crc kubenswrapper[4746]: E0103 03:15:04.465139 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:04 crc kubenswrapper[4746]: E0103 03:15:04.465193 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.468887 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.485072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.485102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.485110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.485125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.485133 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.507846 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.587757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.588225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.588411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.588618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.588876 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.659439 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3" exitCode=0 Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.659676 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.665401 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.679513 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.692239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.692269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.692283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.692299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.692310 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.707473 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0b
ec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.725519 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.746067 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.760457 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.774341 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.785332 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.795404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.795463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.795481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.795506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.795525 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.827284 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.867864 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.899795 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.899836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.899848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.899865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.899878 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:04Z","lastTransitionTime":"2026-01-03T03:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.910735 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.949224 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:04 crc kubenswrapper[4746]: I0103 03:15:04.987633 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:04Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.002154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.002179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.002188 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.002202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.002212 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.029277 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.074024 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.106769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.106796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.106804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.106816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.106826 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.248519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.248554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.248566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.248582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.248595 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.351801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.351878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.351891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.351908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.351920 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.456169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.456233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.456255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.456288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.456311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.558788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.558831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.558843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.558860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.558874 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.661383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.661418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.661428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.661444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.661455 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.673684 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36" exitCode=0 Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.673696 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.704298 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.724809 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.743124 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.761992 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.765243 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.765275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.765289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.765309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.765322 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.780477 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.795645 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.808961 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.820403 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.833169 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.866973 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.868643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.868682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.868692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.868707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.868716 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.903487 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0b
ec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.916917 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.929080 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.938726 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:05Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.970395 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.970432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.970441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.970456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:05 crc kubenswrapper[4746]: I0103 03:15:05.970465 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:05Z","lastTransitionTime":"2026-01-03T03:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.072962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.073012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.073025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.073042 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.073053 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.159618 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.159832 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:15:14.159805186 +0000 UTC m=+34.009695481 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.159990 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.160146 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.160194 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:14.160184975 +0000 UTC m=+34.010075280 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.175098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.175199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.175285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.175348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.175420 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.260475 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.260546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.260584 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260766 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260800 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260820 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260858 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260896 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:14.260870949 +0000 UTC m=+34.110761294 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.260930 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-03 03:15:14.2609134 +0000 UTC m=+34.110803745 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.261183 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.261269 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.261323 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.261439 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:14.261414632 +0000 UTC m=+34.111304937 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.277793 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.277826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.277839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.277858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.277869 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.385392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.385438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.385455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.385521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.385538 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.464985 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.465173 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.465275 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.465355 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.465427 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:06 crc kubenswrapper[4746]: E0103 03:15:06.465498 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.489514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.489559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.489576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.489599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.489617 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.592990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.593206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.593222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.593246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.593265 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.682046 4746 generic.go:334] "Generic (PLEG): container finished" podID="784eb651-1784-4e2a-b0ca-34163f44525c" containerID="23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e" exitCode=0 Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.682083 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerDied","Data":"23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.695495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.695518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.695525 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.695537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.695546 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.699032 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.719481 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.737399 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.759616 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.775937 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.791645 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.798318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.801854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.801875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.801893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.801906 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.819435 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0b
ec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.833208 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.844927 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.856783 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.869186 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.882423 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.893939 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.904085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.904108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.904117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.904138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.904147 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:06Z","lastTransitionTime":"2026-01-03T03:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:06 crc kubenswrapper[4746]: I0103 03:15:06.905386 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:06Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.006440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.006480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.006493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.006511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.006526 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.109403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.109451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.109461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.109476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.109488 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.212476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.212519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.212527 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.212543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.212557 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.315948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.315985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.315994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.316007 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.316017 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.418778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.418815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.418824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.418837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.418847 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.520972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.521003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.521014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.521032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.521042 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.623300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.623346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.623364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.623380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.623392 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.690759 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" event={"ID":"784eb651-1784-4e2a-b0ca-34163f44525c","Type":"ContainerStarted","Data":"a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.698234 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.698724 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.698767 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.715901 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:
41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.726392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.726508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.726905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.726996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.727017 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.730390 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.767973 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.768120 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.768833 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.785581 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.802028 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.824976 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.830159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.830193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.830206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.830226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.830236 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.845072 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.865647 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.885259 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.901068 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.932859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.932910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.932923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.932944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.932958 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:07Z","lastTransitionTime":"2026-01-03T03:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.936341 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0b
ec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.960561 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.976415 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:07 crc kubenswrapper[4746]: I0103 03:15:07.989080 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:07Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.008474 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.027114 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.034902 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.034954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.034974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.034999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.035018 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.043455 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.059618 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.086365 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf
1721fcce1e4217e0add5201e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.111271 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.134219 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.138206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.138297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.138322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.138359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.138387 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.154887 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.169632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.184502 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.199283 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.217282 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.236023 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.240670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.240711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.240720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.240736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.240749 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.255596 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:08Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.344402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.344512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.344545 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.344590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.344623 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.447992 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.448060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.448083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.448114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.448137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.464635 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.464834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:08 crc kubenswrapper[4746]: E0103 03:15:08.465077 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.465160 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:08 crc kubenswrapper[4746]: E0103 03:15:08.465298 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:08 crc kubenswrapper[4746]: E0103 03:15:08.465504 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.551476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.551532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.551551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.551575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.551594 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.655052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.655118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.655136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.655162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.655179 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.701619 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.758501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.758542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.758551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.758567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.758577 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.861269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.861317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.861331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.861352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.861365 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.963878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.963911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.963920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.963934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:08 crc kubenswrapper[4746]: I0103 03:15:08.963974 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:08Z","lastTransitionTime":"2026-01-03T03:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.013570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.013602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.013610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.013623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.013632 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.024925 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.028367 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.029979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.030029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.030043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.030063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.030078 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.045898 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.049790 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.052589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.052624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.052690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.052719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.052735 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.062617 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.065884 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.069078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.069107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.069124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.069143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.069155 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.075894 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.080131 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e
0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.083048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.083075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.083086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.083103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.083114 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.084447 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.093490 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: E0103 03:15:09.093595 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095411 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095587 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.095627 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.107327 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.116731 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.130690 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.148076 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.161396 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544
bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.172386 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.187868 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf
1721fcce1e4217e0add5201e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.198224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.198258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.198268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.198284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.198295 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.201486 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.214763 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.302507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.302891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.302905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.302922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.302933 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.405951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.405996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.406010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.406024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.406035 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.508021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.508064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.508076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.508090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.508101 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.611488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.611522 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.611537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.611551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.611563 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.708503 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/0.log" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.714469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.714527 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.714548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.714578 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.714598 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.715226 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e" exitCode=1 Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.715327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.716626 4746 scope.go:117] "RemoveContainer" containerID="ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.739583 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.759053 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.800326 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf
1721fcce1e4217e0add5201e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"I0103 03:15:09.429225 6081 factory.go:656] Stopping watch factory\\\\nI0103 03:15:09.429386 6081 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.429539 6081 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.429862 6081 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.430146 6081 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430288 6081 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430337 6081 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430483 6081 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.818349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.818410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.818427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.818453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.818472 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.823909 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.844329 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.859791 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.873833 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.887352 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.904136 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.924885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.924939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.924954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.924974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.924989 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:09Z","lastTransitionTime":"2026-01-03T03:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.925374 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.944127 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.960920 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.979817 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:09 crc kubenswrapper[4746]: I0103 03:15:09.999552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:09Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.028913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.028953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.028963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.028977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.028986 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.131991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.132052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.132074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.132100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.132119 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.234647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.234711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.234721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.234741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.234751 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.336685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.336730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.336743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.336764 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.336778 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.439528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.439580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.439594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.439611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.439623 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.464869 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.464876 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:10 crc kubenswrapper[4746]: E0103 03:15:10.465043 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:10 crc kubenswrapper[4746]: E0103 03:15:10.465128 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.465140 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:10 crc kubenswrapper[4746]: E0103 03:15:10.465465 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.484755 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.501721 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.516391 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.533527 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.542239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.542282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.542295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.542316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.542330 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.557686 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"I0103 03:15:09.429225 6081 factory.go:656] Stopping watch factory\\\\nI0103 03:15:09.429386 6081 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.429539 6081 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.429862 6081 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.430146 6081 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430288 6081 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430337 6081 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430483 6081 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec1
7d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.574077 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T
03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84
e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.590040 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f
8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.603575 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.615370 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.625686 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.637441 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.645009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.645056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.645091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.645109 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.645121 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.649841 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.663017 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.673713 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha
256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.720272 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/0.log" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.722496 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.722625 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.734427 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.745746 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.747141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.747178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.747190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.747207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.747221 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.756066 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.767102 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.778485 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.795577 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"I0103 03:15:09.429225 6081 factory.go:656] Stopping watch factory\\\\nI0103 03:15:09.429386 6081 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.429539 6081 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.429862 6081 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.430146 6081 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430288 6081 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430337 6081 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430483 6081 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.821422 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.849176 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.849912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.849950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.849965 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.850012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.850025 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.862969 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.876309 4746 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.889557 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.903366 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.919957 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.933211 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.953196 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.953254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.953272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.953298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:10 crc kubenswrapper[4746]: I0103 03:15:10.953315 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:10Z","lastTransitionTime":"2026-01-03T03:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.055828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.055871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.055882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.055900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.055913 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.158634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.158729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.158748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.158774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.158794 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.261384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.261441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.261459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.261484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.261502 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.364683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.364739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.364756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.364779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.364796 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.467628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.467739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.467758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.467784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.467802 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.570626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.570717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.570743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.570771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.570794 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.674076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.674144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.674163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.674190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.674209 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.729011 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/1.log" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.730011 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/0.log" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.735203 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" exitCode=1 Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.735271 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.735316 4746 scope.go:117] "RemoveContainer" containerID="ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.736515 4746 scope.go:117] "RemoveContainer" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" Jan 03 03:15:11 crc kubenswrapper[4746]: E0103 03:15:11.736878 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.764169 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.765179 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc"] Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.765991 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.769443 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.770102 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.779758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.779806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.779826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.779852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.779872 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.793158 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.815424 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.819587 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qzc\" (UniqueName: \"kubernetes.io/projected/0be8c1d3-1da1-4359-a875-be014834495c-kube-api-access-57qzc\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.819722 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0be8c1d3-1da1-4359-a875-be014834495c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.820287 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.820345 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.839560 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.852945 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.875564 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"I0103 03:15:09.429225 6081 factory.go:656] Stopping watch factory\\\\nI0103 03:15:09.429386 6081 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.429539 6081 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.429862 6081 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.430146 6081 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430288 6081 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430337 6081 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430483 6081 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.882640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.882712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.882730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.882754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.882771 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.894451 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.910328 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.921712 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qzc\" (UniqueName: \"kubernetes.io/projected/0be8c1d3-1da1-4359-a875-be014834495c-kube-api-access-57qzc\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.921926 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0be8c1d3-1da1-4359-a875-be014834495c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.922131 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.922244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.923090 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.924855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0be8c1d3-1da1-4359-a875-be014834495c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.925868 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.932958 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0be8c1d3-1da1-4359-a875-be014834495c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.943746 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.946325 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qzc\" (UniqueName: \"kubernetes.io/projected/0be8c1d3-1da1-4359-a875-be014834495c-kube-api-access-57qzc\") pod \"ovnkube-control-plane-749d76644c-hwmmc\" (UID: \"0be8c1d3-1da1-4359-a875-be014834495c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.957649 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.971319 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.984827 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.987351 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.987395 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.987409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.987427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:11 crc kubenswrapper[4746]: I0103 03:15:11.987439 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:11Z","lastTransitionTime":"2026-01-03T03:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.004389 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.023604 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa1783
05ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.039344 4746 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.061042 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec5012007f4d00997bb9bcbd62cefa62889fcabf1721fcce1e4217e0add5201e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"message\\\":\\\"I0103 03:15:09.429225 6081 factory.go:656] Stopping watch factory\\\\nI0103 03:15:09.429386 6081 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.429539 6081 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.429862 6081 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0103 03:15:09.430146 6081 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430288 6081 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430337 6081 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0103 03:15:09.430483 6081 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.078573 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.089695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.089738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc 
kubenswrapper[4746]: I0103 03:15:12.089753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.089772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.089787 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.093494 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.098876 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.109582 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: W0103 03:15:12.115418 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be8c1d3_1da1_4359_a875_be014834495c.slice/crio-349649502ec3c9db50fe421ae3ca24855250aa58b5895926281a991451e352f1 WatchSource:0}: Error finding container 349649502ec3c9db50fe421ae3ca24855250aa58b5895926281a991451e352f1: Status 404 returned error can't find the container with id 349649502ec3c9db50fe421ae3ca24855250aa58b5895926281a991451e352f1 Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.122982 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.135819 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.147834 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.160852 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.171729 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.189217 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.192400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.192423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.192431 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.192443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.192453 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.204608 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.218070 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.230712 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.294234 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.294291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.294306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.294346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.294358 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.397134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.397165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.397173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.397185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.397194 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.464290 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.464332 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:12 crc kubenswrapper[4746]: E0103 03:15:12.464430 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.464296 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:12 crc kubenswrapper[4746]: E0103 03:15:12.464520 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:12 crc kubenswrapper[4746]: E0103 03:15:12.464742 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.500370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.500410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.500425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.500445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.500460 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.602149 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.602202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.602218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.602240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.602257 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.704714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.704774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.704790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.704805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.704816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.739377 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" event={"ID":"0be8c1d3-1da1-4359-a875-be014834495c","Type":"ContainerStarted","Data":"349649502ec3c9db50fe421ae3ca24855250aa58b5895926281a991451e352f1"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.742831 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/1.log" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.750875 4746 scope.go:117] "RemoveContainer" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" Jan 03 03:15:12 crc kubenswrapper[4746]: E0103 03:15:12.751212 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.765841 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.779397 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.791728 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.801896 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.807134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.807174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.807189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.807211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.807229 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.813854 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.826284 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.842445 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.857575 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.871405 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.888500 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.904306 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.909075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.909110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.909122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.909138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.909150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:12Z","lastTransitionTime":"2026-01-03T03:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.917780 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.936579 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.950247 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:12 crc kubenswrapper[4746]: I0103 03:15:12.961091 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:12Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.011692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.011752 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.011761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.011776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.011785 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.113696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.113730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.113739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.113752 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.113760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.217416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.217780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.217886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.218055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.218137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.218275 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-57tv2"] Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.218944 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: E0103 03:15:13.219005 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.230244 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.234731 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.234936 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clfq6\" (UniqueName: \"kubernetes.io/projected/28a574f3-8744-4d57-aada-e4b328244e19-kube-api-access-clfq6\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.246306 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.258123 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.268619 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.287370 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.303174 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.317086 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.322070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.322098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.322107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.322122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.322131 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.332382 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.336223 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.336262 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clfq6\" (UniqueName: \"kubernetes.io/projected/28a574f3-8744-4d57-aada-e4b328244e19-kube-api-access-clfq6\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: E0103 03:15:13.336375 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:13 crc kubenswrapper[4746]: E0103 03:15:13.336462 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:13.836445809 +0000 UTC m=+33.686336114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.350820 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.359861 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clfq6\" (UniqueName: \"kubernetes.io/projected/28a574f3-8744-4d57-aada-e4b328244e19-kube-api-access-clfq6\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.363812 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.380434 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.393136 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.401771 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.412329 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.422728 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.424561 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.424589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.424600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.424613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.424622 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.434512 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.528688 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.528815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.528894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.528921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.529125 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.632723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.632761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.632772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.632790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.632864 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.735491 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.735535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.735551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.735566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.735577 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.753014 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" event={"ID":"0be8c1d3-1da1-4359-a875-be014834495c","Type":"ContainerStarted","Data":"5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.753065 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" event={"ID":"0be8c1d3-1da1-4359-a875-be014834495c","Type":"ContainerStarted","Data":"7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.765772 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.777265 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hos
troot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.787932 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.797251 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.807377 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.818473 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.831159 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.837925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.837950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.837958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.837971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.837980 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.839612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:13 crc kubenswrapper[4746]: E0103 03:15:13.840048 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:13 crc kubenswrapper[4746]: E0103 03:15:13.840103 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:14.840091416 +0000 UTC m=+34.689981721 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.843376 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.855552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.865449 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 
03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.878711 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.890279 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.906481 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.916901 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.928814 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03
T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.939619 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-03T03:15:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.940001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.940022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.940033 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.940049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:13 crc kubenswrapper[4746]: I0103 03:15:13.940057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:13Z","lastTransitionTime":"2026-01-03T03:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.042186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.042221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.042229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.042244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.042253 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.144012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.144069 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.144081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.144096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.144107 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.243714 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.243837 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:15:30.243817077 +0000 UTC m=+50.093707392 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.244076 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.244190 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.244246 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-03 03:15:30.244236937 +0000 UTC m=+50.094127242 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.245865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.245899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.245911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.245929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.245940 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.344716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.344803 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.344830 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.344901 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:30.344883951 +0000 UTC m=+50.194774246 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.344927 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345048 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345084 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345093 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345109 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345108 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345120 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345161 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:30.345151337 +0000 UTC m=+50.195041722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.345200 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-03 03:15:30.345172007 +0000 UTC m=+50.195062362 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.347979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.348017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.348032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.348047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.348057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.450492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.450564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.450581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.450605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.450623 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.464606 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.464709 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.464743 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.464765 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.464906 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.464984 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.465035 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.465138 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.553182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.553242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.553260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.553284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.553298 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.656133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.656171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.656179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.656193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.656201 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.757894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.757968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.757996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.758018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.758032 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.848612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.848845 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: E0103 03:15:14.848895 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:16.848881536 +0000 UTC m=+36.698771841 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.860059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.860095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.860110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.860129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.860145 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.962751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.962790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.962801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.962816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:14 crc kubenswrapper[4746]: I0103 03:15:14.962828 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:14Z","lastTransitionTime":"2026-01-03T03:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.064399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.064426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.064434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.064446 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.064454 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.166141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.166186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.166228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.166253 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.166304 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.268189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.268236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.268245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.268257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.268267 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.370991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.371026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.371034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.371046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.371057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.472905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.472964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.472987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.473017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.473039 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.576551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.576593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.576606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.576622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.576636 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.680321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.680390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.680413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.680443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.680462 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.783184 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.783222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.783233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.783254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.783265 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.886374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.886417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.886428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.886445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.886460 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.989791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.989851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.989869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.989898 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:15 crc kubenswrapper[4746]: I0103 03:15:15.989916 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:15Z","lastTransitionTime":"2026-01-03T03:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.092847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.092895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.092913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.092937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.092955 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.196209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.196276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.196293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.196318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.196335 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.299635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.299727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.299745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.299769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.299790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.402172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.402306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.402374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.402406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.402427 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.464225 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.464340 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.464417 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.464611 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.464497 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.464481 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.464766 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.464877 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.505535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.505800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.505911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.505996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.506083 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.610111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.610202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.610219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.610242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.610260 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.712890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.712953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.712971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.713026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.713045 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.815555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.815978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.816132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.816282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.816410 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.870998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.871555 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:16 crc kubenswrapper[4746]: E0103 03:15:16.871874 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:20.871841723 +0000 UTC m=+40.721732058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.924416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.925651 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.925792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.925831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:16 crc kubenswrapper[4746]: I0103 03:15:16.925854 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:16Z","lastTransitionTime":"2026-01-03T03:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.029041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.029100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.029117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.029191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.029211 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.132318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.132384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.132406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.132437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.132460 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.235908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.235977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.235995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.236021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.236041 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.339503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.339585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.339605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.339631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.339649 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.443440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.443517 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.443535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.443564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.443581 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.546637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.546712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.546729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.546754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.546772 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.648914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.648955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.648966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.648982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.648991 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.752399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.752444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.752457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.752475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.752488 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.855734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.855784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.855794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.855810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.855824 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.958475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.958505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.958513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.958526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:17 crc kubenswrapper[4746]: I0103 03:15:17.958535 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:17Z","lastTransitionTime":"2026-01-03T03:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.060779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.061075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.061153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.061233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.061303 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.163433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.163859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.164019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.164128 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.164243 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.267692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.267755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.267775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.267801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.267825 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.370346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.370379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.370386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.370399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.370409 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.464849 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.464922 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.464940 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:18 crc kubenswrapper[4746]: E0103 03:15:18.465077 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.465121 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:18 crc kubenswrapper[4746]: E0103 03:15:18.465211 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:18 crc kubenswrapper[4746]: E0103 03:15:18.465292 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:18 crc kubenswrapper[4746]: E0103 03:15:18.465499 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.472864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.472917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.472933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.472960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.472977 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.575773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.575816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.575824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.575840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.575849 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.678078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.678138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.678146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.678162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.678171 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.780394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.780438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.780453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.780469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.780483 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.884421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.884466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.884478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.884497 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.884512 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.987577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.987616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.987624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.987639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:18 crc kubenswrapper[4746]: I0103 03:15:18.987648 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:18Z","lastTransitionTime":"2026-01-03T03:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.090324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.090392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.090414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.090445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.090485 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.196269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.196357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.196377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.196406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.196430 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.298999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.299066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.299089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.299121 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.299143 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.354162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.354230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.354255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.354282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.354314 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.377403 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:19Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.381543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.381593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.381605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.381622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.381699 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.400864 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:19Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.405817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.405869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.405886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.405910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.405929 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.424961 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:19Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.428862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.428902 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.428917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.428938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.428955 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.448789 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:19Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.453755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.453836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.453855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.453881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.453897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.473213 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:19Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:19 crc kubenswrapper[4746]: E0103 03:15:19.473436 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.475320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.475369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.475387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.475410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.475433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.578323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.578403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.578438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.578467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.578491 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.682045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.682104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.682127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.682156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.682176 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.785723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.785757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.785765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.785779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.785790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.888059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.888098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.888107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.888123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.888132 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.990389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.990423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.990432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.990445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:19 crc kubenswrapper[4746]: I0103 03:15:19.990453 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:19Z","lastTransitionTime":"2026-01-03T03:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.092839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.092879 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.092889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.092909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.092923 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.195636 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.195720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.195735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.195755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.195770 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.297833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.297894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.297912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.297936 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.297954 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.401016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.401084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.401102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.401129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.401147 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.464252 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.464335 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.464333 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.464451 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.464468 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.464565 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.464835 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.466400 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.494405 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.506071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.506505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.506708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.506882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.507032 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.513818 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.535826 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.556871 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.573148 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.589277 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.608444 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.610859 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.610895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.610906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.610922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.610932 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.636914 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.657780 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.673162 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.692178 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.711953 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.714067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.714116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.714128 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.714146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.714158 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.719488 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.720906 4746 scope.go:117] "RemoveContainer" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.721170 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.727781 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.758274 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.780145 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.794179 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.816760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.816821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.816840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.816867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.816885 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.914117 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.914277 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:20 crc kubenswrapper[4746]: E0103 03:15:20.914338 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:28.914321046 +0000 UTC m=+48.764211341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.919638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.919816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.919877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.919958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:20 crc kubenswrapper[4746]: I0103 03:15:20.920014 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:20Z","lastTransitionTime":"2026-01-03T03:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.022518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.023056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.023307 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.023526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.023751 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.129925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.129987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.130007 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.130049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.130069 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.234017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.234329 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.234465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.234707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.234983 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.340100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.340158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.340176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.340201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.340219 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.446644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.447709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.448055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.448448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.448925 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.552702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.553244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.553441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.553644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.553859 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.657516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.657990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.658211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.658404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.658581 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.762694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.763150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.763326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.763514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.763724 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.867919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.867991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.868006 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.868029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.868048 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.971163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.971226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.971244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.971273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:21 crc kubenswrapper[4746]: I0103 03:15:21.971291 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:21Z","lastTransitionTime":"2026-01-03T03:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.074508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.074560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.074603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.074624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.074636 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.177565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.177627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.177644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.177709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.177733 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.281134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.281210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.281234 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.281263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.281287 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.384440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.384834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.385050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.385375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.385513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.464527 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.464629 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:22 crc kubenswrapper[4746]: E0103 03:15:22.464698 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.464755 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:22 crc kubenswrapper[4746]: E0103 03:15:22.464885 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.464932 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:22 crc kubenswrapper[4746]: E0103 03:15:22.464983 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:22 crc kubenswrapper[4746]: E0103 03:15:22.465062 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.488997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.489047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.489059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.489079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.489098 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.592121 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.592196 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.592214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.592244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.592264 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.695265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.695319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.695337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.695361 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.695378 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.822803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.822833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.822842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.822854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.822864 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.925986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.926099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.926122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.926150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:22 crc kubenswrapper[4746]: I0103 03:15:22.926171 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:22Z","lastTransitionTime":"2026-01-03T03:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.029190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.029276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.029295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.029326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.029346 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.133253 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.133346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.133371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.133407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.133431 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.236286 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.236342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.236360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.236392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.236414 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.340719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.340794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.340818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.340859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.340890 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.444336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.444846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.445047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.445204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.445348 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.549223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.550266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.550427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.550559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.550731 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.653394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.653898 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.654172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.654340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.654475 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.757956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.758259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.758426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.758577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.758753 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.861860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.861913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.861924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.861944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.861955 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.965741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.965815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.965837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.965863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:23 crc kubenswrapper[4746]: I0103 03:15:23.965882 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:23Z","lastTransitionTime":"2026-01-03T03:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.068652 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.068784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.068811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.068849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.068875 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.172503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.172577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.172599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.173403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.173504 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.276265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.276344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.276363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.276392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.276412 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.379552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.379610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.379629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.379688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.379708 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.464076 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.464164 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.464313 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.464620 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:24 crc kubenswrapper[4746]: E0103 03:15:24.464604 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:24 crc kubenswrapper[4746]: E0103 03:15:24.464797 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:24 crc kubenswrapper[4746]: E0103 03:15:24.464966 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:24 crc kubenswrapper[4746]: E0103 03:15:24.465130 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.482647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.482738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.482757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.482786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.482808 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.586332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.586400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.586420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.586447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.586467 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.689575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.689620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.689637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.689698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.689726 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.792761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.792844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.792876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.792907 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.792939 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.897065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.897166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.897187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.897217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:24 crc kubenswrapper[4746]: I0103 03:15:24.897244 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:24Z","lastTransitionTime":"2026-01-03T03:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.000498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.000565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.000588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.000621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.000649 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.104275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.104333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.104351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.104376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.104396 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.207389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.207471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.207489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.207518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.207537 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.311499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.311579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.311601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.311632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.311651 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.415301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.415374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.415392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.415420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.415440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.519093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.519160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.519181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.519210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.519232 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.622716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.622818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.622837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.622873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.622896 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.726225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.726269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.726280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.726305 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.726322 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.830013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.830086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.830106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.830133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.830156 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.933780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.933825 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.933836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.933855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:25 crc kubenswrapper[4746]: I0103 03:15:25.933867 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:25Z","lastTransitionTime":"2026-01-03T03:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.039103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.039870 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.040616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.041021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.041348 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.144840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.144905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.144925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.144953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.144972 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.248955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.249363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.249536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.249716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.249845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.353134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.353196 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.353213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.353238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.353259 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.456864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.456936 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.456959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.456989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.457008 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.464275 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.464305 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.464391 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:26 crc kubenswrapper[4746]: E0103 03:15:26.464484 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:26 crc kubenswrapper[4746]: E0103 03:15:26.464598 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.464745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:26 crc kubenswrapper[4746]: E0103 03:15:26.464757 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:26 crc kubenswrapper[4746]: E0103 03:15:26.464984 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.560495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.560633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.560689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.560726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.560747 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.664510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.664746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.664767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.664797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.664815 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.768419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.768533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.768553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.768583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.768606 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.871995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.872076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.872099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.872127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.872187 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.976235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.976321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.976347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.976383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:26 crc kubenswrapper[4746]: I0103 03:15:26.976405 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:26Z","lastTransitionTime":"2026-01-03T03:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.079508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.079577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.079594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.079618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.079640 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.183887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.183974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.183999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.184073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.184099 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.287355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.287436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.287459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.287495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.287524 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.391197 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.391295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.391355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.391385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.391440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.494926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.494963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.494974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.494989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.495003 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.598531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.598596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.598615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.598646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.598690 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.702419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.702497 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.702521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.702554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.702582 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.806493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.806576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.806597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.806627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.806753 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.909643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.909746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.909766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.909792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:27 crc kubenswrapper[4746]: I0103 03:15:27.909809 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:27Z","lastTransitionTime":"2026-01-03T03:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.013771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.013845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.013863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.013893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.013919 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.117376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.117511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.117532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.117565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.117588 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.222055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.222123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.222134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.222154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.222168 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.326540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.326601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.326614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.326639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.326717 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.430346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.430453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.430481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.430520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.430546 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.465051 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.465342 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.465036 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.465440 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.465577 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.465801 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.465646 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.466128 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.534282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.534952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.535453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.535714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.535910 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.639939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.639999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.640014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.640038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.640056 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.742978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.743031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.743045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.743066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.743080 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.845838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.845893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.845906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.845930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.845947 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.924052 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.924465 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:28 crc kubenswrapper[4746]: E0103 03:15:28.924555 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:15:44.924530411 +0000 UTC m=+64.774420756 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.949796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.949863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.949882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.949942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:28 crc kubenswrapper[4746]: I0103 03:15:28.949965 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:28Z","lastTransitionTime":"2026-01-03T03:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.053401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.053470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.053490 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.053517 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.053532 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.157082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.157137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.157156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.157211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.157232 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.260682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.260745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.260762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.260786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.260802 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.364566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.364641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.364731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.364766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.364790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.467807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.467872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.467886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.467911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.467928 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.556784 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.571184 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.571225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.571237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.571262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.571280 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.574391 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.576162 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.591552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.610906 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.629552 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.653819 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.670838 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.674316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.674471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.674565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.674684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.674767 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.693043 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.711981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.712028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.712043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.712065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.712080 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.715246 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.734920 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e
0c9d956-6366-4423-bba4-4b3a38c60b92\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.738386 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c9
8503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.741011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.741075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.741094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.741125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.741146 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.767711 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 
2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.772049 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.774876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.775039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.775071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.775157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.775235 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.799393 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 
2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.804961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.805017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.805034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.805063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.805089 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.809045 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd
6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.827991 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 
2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.830318 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.833874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.833955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.833982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.834016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.834045 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.844424 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.851390 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 
2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: E0103 03:15:29.851839 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.854225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.854375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.854460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.854555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.854632 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.866249 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.880411 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.893004 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:29Z is after 
2025-08-24T17:21:41Z" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.958270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.958415 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.958493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.958554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:29 crc kubenswrapper[4746]: I0103 03:15:29.958607 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:29Z","lastTransitionTime":"2026-01-03T03:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.062054 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.062105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.062117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.062135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.062145 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.165965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.166039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.166059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.166088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.166109 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.269268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.269306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.269317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.269333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.269344 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.344082 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.344246 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.344400 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:16:02.344359855 +0000 UTC m=+82.194250200 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.344562 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.344721 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-03 03:16:02.344637091 +0000 UTC m=+82.194527426 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.371492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.371552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.371568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.371591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.371609 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.446366 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.446441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.446551 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446627 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446747 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:16:02.446724028 +0000 UTC m=+82.296614493 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446755 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446792 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446812 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446809 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446846 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446866 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:16:02.446848671 +0000 UTC m=+82.296739016 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446868 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.446960 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:16:02.446935173 +0000 UTC m=+82.296825508 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.464260 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.464314 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.464371 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.464293 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.464441 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.464558 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.464617 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:30 crc kubenswrapper[4746]: E0103 03:15:30.464787 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.473637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.473718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.473737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.473760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.473777 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.512075 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.529143 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.553316 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.576788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.576836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.576849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.576906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.576925 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.588827 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.613368 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.633432 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.659599 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.676428 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.681393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.681451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.681487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.681516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.681537 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.696723 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.715019 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.730236 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.750498 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.764783 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.779581 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.784057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.784098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.784115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.784133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.784147 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.800594 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.814225 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.832934 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:30Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.887118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.887157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.887172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.887191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.887205 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.990553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.990627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.990650 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.990714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:30 crc kubenswrapper[4746]: I0103 03:15:30.990735 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:30Z","lastTransitionTime":"2026-01-03T03:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.094555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.094607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.094620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.094644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.094673 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.198229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.198296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.198319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.198348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.198371 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.301687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.301731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.301765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.301781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.301791 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.404451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.404502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.404511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.404526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.404535 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.506749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.506820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.506843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.506872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.506892 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.610332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.610386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.610402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.610424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.610443 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.712792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.712831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.712840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.712852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.712862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.815787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.815833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.815844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.815858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.815867 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.918382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.918463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.918477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.918494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:31 crc kubenswrapper[4746]: I0103 03:15:31.918504 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:31Z","lastTransitionTime":"2026-01-03T03:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.020865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.020906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.020918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.020933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.020944 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.123119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.123168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.123177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.123192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.123201 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.226088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.226127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.226140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.226182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.226195 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.328418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.328461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.328473 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.328489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.328499 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.431153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.431198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.431207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.431222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.431232 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.464568 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.464603 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.464622 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.464784 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:32 crc kubenswrapper[4746]: E0103 03:15:32.465415 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:32 crc kubenswrapper[4746]: E0103 03:15:32.465485 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:32 crc kubenswrapper[4746]: E0103 03:15:32.465614 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.465914 4746 scope.go:117] "RemoveContainer" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" Jan 03 03:15:32 crc kubenswrapper[4746]: E0103 03:15:32.466650 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.533492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.533798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.533806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.533819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.533827 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.640040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.640081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.640093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.640110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.640121 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.743503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.743543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.743554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.743577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.743591 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.846979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.847027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.847041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.847062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.847083 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.866571 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/1.log" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.869824 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.870551 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.928750 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:32Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.945293 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:32Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.950066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.950109 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.950123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.950146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.950162 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:32Z","lastTransitionTime":"2026-01-03T03:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.962560 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:32Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.980033 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:32Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:32 crc kubenswrapper[4746]: I0103 03:15:32.994543 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:32Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.008075 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.026027 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.041761 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053706 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053765 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.053867 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.069951 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.091327 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.112753 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.127861 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.143900 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.156732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.156788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.156803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.156823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.156837 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.163766 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed 
cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.178894 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.192835 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 
03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.259201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.259259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.259272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.259297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.259318 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.362237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.362275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.362285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.362300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.362314 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.464943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.465239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.465325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.465390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.465458 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.569254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.569328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.569348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.569378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.569427 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.672643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.672731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.672744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.672769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.672781 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.776737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.776832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.776852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.776910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.776932 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.876098 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/2.log" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.878788 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/1.log" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.880789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.880859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.880873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.880892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.880907 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.885897 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" exitCode=1 Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.885970 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930"} Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.886063 4746 scope.go:117] "RemoveContainer" containerID="42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.887380 4746 scope.go:117] "RemoveContainer" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" Jan 03 03:15:33 crc kubenswrapper[4746]: E0103 03:15:33.887877 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.916484 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.931084 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.947903 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.967503 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.991200 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.993609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.993712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.993732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.993763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:33 crc kubenswrapper[4746]: I0103 03:15:33.993784 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:33Z","lastTransitionTime":"2026-01-03T03:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.008876 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.024962 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.042253 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\
",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.056630 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.073283 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.093960 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.097539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.097606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.097630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.097690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.097714 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.108376 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.123504 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.142428 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.161772 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92
920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42c62ac32647f684af2887d1af4cfac0709309cd6a35de5c4eef20d4fcc690e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:10Z\\\",\\\"message\\\":\\\"ap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-controller-manager-operator/metrics]} name:Service_openshift-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.58:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4607c9b7-15f9-4ba0-86e5-0021ba7e4488}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0103 03:15:10.645784 6205 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nI0103 03:15:10.645792 6205 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-rzrbx\\\\nF0103 03:15:10.645796 6205 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed cal\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.178803 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.190167 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 
03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.201276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.201344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.201366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.201396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.201417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.304349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.304415 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.304436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.304460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.304478 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.407596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.407689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.407708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.407740 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.407765 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.464065 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.464202 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.464216 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:34 crc kubenswrapper[4746]: E0103 03:15:34.464313 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:34 crc kubenswrapper[4746]: E0103 03:15:34.464464 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.464588 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:34 crc kubenswrapper[4746]: E0103 03:15:34.464643 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:34 crc kubenswrapper[4746]: E0103 03:15:34.464814 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.510944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.511012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.511031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.511059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.511077 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.614742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.614823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.614844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.614872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.614890 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.718592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.718693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.718712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.718738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.718760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.821613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.822046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.822216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.822409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.822773 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.892859 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/2.log" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.898782 4746 scope.go:117] "RemoveContainer" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" Jan 03 03:15:34 crc kubenswrapper[4746]: E0103 03:15:34.899107 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.922239 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.926097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.926153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.926171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.926202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.926224 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:34Z","lastTransitionTime":"2026-01-03T03:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.945493 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.963144 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:34 crc kubenswrapper[4746]: I0103 03:15:34.985070 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:34Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.006996 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.030169 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.030241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.030262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.030295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.030317 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.031203 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.051593 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.072241 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.090209 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.111480 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.128855 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.139176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.139282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.139322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.139363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.139402 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.153314 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.186848 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.204965 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.222940 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.237490 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.243087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.243142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.243157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.243183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.243204 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.252930 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:35Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.346407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.346501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.346523 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.346562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.346584 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.450230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.450330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.450350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.450381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.450399 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.553230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.553288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.553305 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.553331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.553350 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.656005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.656081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.656119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.656148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.656173 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.759584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.759722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.759744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.759767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.759785 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.863359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.863427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.863463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.863495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.863517 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.966804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.966884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.966904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.966933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:35 crc kubenswrapper[4746]: I0103 03:15:35.966957 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:35Z","lastTransitionTime":"2026-01-03T03:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.070371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.070436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.070453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.070480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.070497 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.173547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.173607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.173688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.173725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.173752 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.277426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.277484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.277500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.277529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.277553 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.380061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.380111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.380128 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.380150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.380170 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.464436 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.464681 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:36 crc kubenswrapper[4746]: E0103 03:15:36.464776 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.464799 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.464829 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:36 crc kubenswrapper[4746]: E0103 03:15:36.465109 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:36 crc kubenswrapper[4746]: E0103 03:15:36.465396 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:36 crc kubenswrapper[4746]: E0103 03:15:36.465461 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.483722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.483824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.483843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.483908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.483928 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.588087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.588160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.588179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.588210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.588229 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.861193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.861257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.861273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.861295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.861308 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.964522 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.964598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.964618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.964646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:36 crc kubenswrapper[4746]: I0103 03:15:36.964707 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:36Z","lastTransitionTime":"2026-01-03T03:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.068049 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.068119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.068140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.068166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.068188 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.171601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.171685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.171698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.171719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.171730 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.275339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.275386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.275399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.275419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.275433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.378605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.378693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.378714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.378745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.378767 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.482876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.483193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.483329 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.483485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.483637 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.590553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.590693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.590772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.590814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.590851 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.694643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.695148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.695347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.695522 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.695683 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.799904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.799975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.799996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.800024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.800045 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.904566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.904653 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.904714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.904749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:37 crc kubenswrapper[4746]: I0103 03:15:37.904770 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:37Z","lastTransitionTime":"2026-01-03T03:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.007769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.007840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.007861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.007895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.007916 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.111021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.111088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.111107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.111138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.111161 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.215409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.215514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.215545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.215589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.215622 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.320354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.320435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.320457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.320486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.320508 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.423896 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.423989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.424013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.424047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.424075 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.464244 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.464376 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.464440 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:38 crc kubenswrapper[4746]: E0103 03:15:38.464491 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:38 crc kubenswrapper[4746]: E0103 03:15:38.464556 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:38 crc kubenswrapper[4746]: E0103 03:15:38.464840 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.464947 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:38 crc kubenswrapper[4746]: E0103 03:15:38.465106 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.527805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.527876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.527896 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.527928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.527954 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.632323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.632406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.632434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.632466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.632491 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.736454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.736545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.736573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.736609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.736631 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.838984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.839050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.839084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.839103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.839114 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.945448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.945514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.945590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.945620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:38 crc kubenswrapper[4746]: I0103 03:15:38.945640 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:38Z","lastTransitionTime":"2026-01-03T03:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.048934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.049000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.049019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.049050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.049069 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.152884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.152938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.152948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.152965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.152977 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.256282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.256338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.256355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.256379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.256396 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.358807 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.358867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.358876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.358891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.358900 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.461697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.461771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.461782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.461821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.461836 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.565746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.565830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.565855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.565888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.565914 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.669194 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.669268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.669287 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.669316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.669345 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.772715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.772794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.772815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.772843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.772868 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.877295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.877407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.877428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.877453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.877474 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.980394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.980450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.980469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.980510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:39 crc kubenswrapper[4746]: I0103 03:15:39.980568 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:39Z","lastTransitionTime":"2026-01-03T03:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.090219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.090287 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.090310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.090351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.090375 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.193888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.193947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.193961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.193984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.193997 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.208098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.208134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.208147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.208161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.208172 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.227268 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.231293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.231329 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.231340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.231355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.231366 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.243498 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.247804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.247841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.247856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.247876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.247889 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.264886 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.269470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.269539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.269553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.269571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.270026 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.286834 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.291368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.291410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.291424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.291447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.291461 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.308448 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.308695 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.310728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.310784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.310801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.310826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.310846 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.414126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.414201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.414220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.414251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.414272 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.464092 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.464169 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.464169 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.464301 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.464430 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.464494 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.464564 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:40 crc kubenswrapper[4746]: E0103 03:15:40.464677 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.507750 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.517900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.517967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.517979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.517999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.518028 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.519706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.529863 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.543584 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.554352 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.565719 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.579812 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.589834 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.603598 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.615316 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.621484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.621512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.621524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.621538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.621547 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.626687 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.643520 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.656962 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.665926 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.681130 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427
ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.690624 4746 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.703682 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:40Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.723518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.723559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.723569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.723586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.723597 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.826030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.826071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.826080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.826093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.826103 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.928624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.928669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.928679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.928692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:40 crc kubenswrapper[4746]: I0103 03:15:40.928701 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:40Z","lastTransitionTime":"2026-01-03T03:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.031053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.031088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.031097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.031126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.031137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.134515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.134574 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.134592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.134617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.134635 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.237502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.237554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.237574 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.237595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.237613 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.340350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.340388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.340398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.340414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.340426 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.442771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.443321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.443343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.443367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.443383 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.545606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.545709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.545731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.545769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.545785 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.648700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.648772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.648796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.648830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.648858 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.752394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.752453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.752471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.752495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.752513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.856190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.856231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.856240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.856255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.856264 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.960019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.960088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.960111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.960152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:41 crc kubenswrapper[4746]: I0103 03:15:41.960176 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:41Z","lastTransitionTime":"2026-01-03T03:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.063593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.063697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.063724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.063758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.063782 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.166669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.166708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.166716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.166730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.166739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.269173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.269223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.269239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.269261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.269280 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.371716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.371784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.371803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.371830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.371850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.468648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:42 crc kubenswrapper[4746]: E0103 03:15:42.468863 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.469168 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:42 crc kubenswrapper[4746]: E0103 03:15:42.469271 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.469476 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:42 crc kubenswrapper[4746]: E0103 03:15:42.469573 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.470036 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:42 crc kubenswrapper[4746]: E0103 03:15:42.470133 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.475729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.475789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.475810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.475838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.475859 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.579428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.579478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.579495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.579519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.579540 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.682186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.682231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.682248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.682270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.682286 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.785266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.785316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.785332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.785356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.785373 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.888734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.888797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.888808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.888846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.888860 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.992865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.992960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.992988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.993023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:42 crc kubenswrapper[4746]: I0103 03:15:42.993049 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:42Z","lastTransitionTime":"2026-01-03T03:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.096339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.096403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.096463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.096497 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.096522 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.200044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.200127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.200147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.200192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.200231 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.302852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.302906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.302924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.302948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.302969 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.405688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.405755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.405774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.405801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.405820 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.509041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.509097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.509114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.509138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.509154 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.612113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.612301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.612322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.612379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.612397 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.716079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.716161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.716192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.716212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.716227 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.819323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.819426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.819439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.819466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.819480 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.922166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.922264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.922290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.922338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:43 crc kubenswrapper[4746]: I0103 03:15:43.922367 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:43Z","lastTransitionTime":"2026-01-03T03:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.025192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.025243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.025266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.025295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.025313 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.127844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.127884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.127892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.127907 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.127919 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.229795 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.229840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.229855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.229873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.229886 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.336838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.336883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.336893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.336914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.336924 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.439816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.439857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.439869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.439886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.439897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.464332 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.464375 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.464450 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.464456 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.464391 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.464593 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.464716 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.464913 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.543122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.543154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.543162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.543179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.543189 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.645978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.646012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.646024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.646040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.646051 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.749179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.749225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.749239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.749260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.749273 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.852925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.852987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.853013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.853045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.853070 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.947217 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.947424 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:44 crc kubenswrapper[4746]: E0103 03:15:44.947507 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:16:16.947488251 +0000 UTC m=+96.797378556 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.955685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.955720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.955728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.955741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:44 crc kubenswrapper[4746]: I0103 03:15:44.955750 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:44Z","lastTransitionTime":"2026-01-03T03:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.059901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.059990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.060018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.060051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.060081 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.162243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.162285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.162294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.162310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.162322 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.265461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.265539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.265562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.265590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.265610 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.368454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.368505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.368516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.368536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.368551 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.473151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.473221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.473230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.473263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.473274 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.479764 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.575977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.576047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.576065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.576092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.576112 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.678979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.679019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.679027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.679047 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.679059 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.782184 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.782242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.782251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.782266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.782276 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.884742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.884809 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.884825 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.884845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.884862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.987281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.987334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.987348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.987371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:45 crc kubenswrapper[4746]: I0103 03:15:45.987386 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:45Z","lastTransitionTime":"2026-01-03T03:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.090871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.090922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.090934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.090953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.090970 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.193720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.193764 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.193778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.193795 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.193807 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.296427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.296532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.296591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.296611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.296622 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.399099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.399142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.399152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.399167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.399205 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.463869 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.463956 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.463956 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:46 crc kubenswrapper[4746]: E0103 03:15:46.464080 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.464170 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:46 crc kubenswrapper[4746]: E0103 03:15:46.464429 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:46 crc kubenswrapper[4746]: E0103 03:15:46.465366 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:46 crc kubenswrapper[4746]: E0103 03:15:46.465440 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.465943 4746 scope.go:117] "RemoveContainer" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" Jan 03 03:15:46 crc kubenswrapper[4746]: E0103 03:15:46.466279 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.502356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.502404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.502429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.502460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.502484 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.605687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.605722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.605737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.605750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.605760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.709192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.709262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.709281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.709308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.709328 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.812119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.812167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.812181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.812206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.812221 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.915409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.915469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.915488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.915515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:46 crc kubenswrapper[4746]: I0103 03:15:46.915534 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:46Z","lastTransitionTime":"2026-01-03T03:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.018270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.018311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.018321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.018338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.018351 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.121138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.121200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.121221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.121247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.121270 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.224722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.224778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.224789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.224808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.224821 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.328638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.328717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.328731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.328753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.328764 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.432416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.432470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.432479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.432497 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.432508 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.535439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.535479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.535492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.535512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.535528 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.638693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.638758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.638776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.638803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.638821 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.742362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.742413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.742427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.742464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.742474 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.845558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.845614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.845631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.845681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.845701 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.946847 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/0.log" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.946911 4746 generic.go:334] "Generic (PLEG): container finished" podID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" containerID="7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277" exitCode=1 Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.946973 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerDied","Data":"7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.947860 4746 scope.go:117] "RemoveContainer" containerID="7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.948024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.948053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.948063 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.948078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.948088 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:47Z","lastTransitionTime":"2026-01-03T03:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.961008 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:47Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.978168 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:47Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:47 crc kubenswrapper[4746]: I0103 03:15:47.996935 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:47Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.012036 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.027508 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.040817 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.051859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.051894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.051909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.051930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.051941 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.052426 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.071513 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.090600 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.107213 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.122045 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.135432 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.153569 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.155986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.157895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.157927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.157962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.157980 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.170994 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.184949 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.200007 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.214060 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.229526 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.261039 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.261102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.261122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.261146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.261161 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.364888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.364941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.364952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.364972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.364983 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.464255 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.464329 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.464337 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.464472 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:48 crc kubenswrapper[4746]: E0103 03:15:48.464469 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:48 crc kubenswrapper[4746]: E0103 03:15:48.464553 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:48 crc kubenswrapper[4746]: E0103 03:15:48.464609 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:48 crc kubenswrapper[4746]: E0103 03:15:48.464742 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.467702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.467746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.467759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.467777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.467790 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.569891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.569926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.569938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.569955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.569964 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.672189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.672320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.672345 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.672370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.672391 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.775392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.775450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.775460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.775475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.775489 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.878061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.878100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.878112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.878129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.878139 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.951730 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/0.log" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.951795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerStarted","Data":"46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.964605 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.975938 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.980504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.980536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.980547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.980566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.980575 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:48Z","lastTransitionTime":"2026-01-03T03:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:48 crc kubenswrapper[4746]: I0103 03:15:48.987056 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.000511 4746 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e72518
6fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:48Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.010743 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd
1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.021606 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.033588 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.048823 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.064853 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.078613 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.082622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.082688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.082711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.082728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.082739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.092088 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.105785 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.123735 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.138927 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822
cdf22bcd83f0b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.150677 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.168359 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92
920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.185125 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.186836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.186862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.186908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.186929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.186941 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.199800 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:49Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.289613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.289648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.289672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.289691 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.289703 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.392781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.392828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.392837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.392860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.392870 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.495979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.496028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.496043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.496067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.496086 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.598022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.598059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.598067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.598083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.598093 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.701352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.701414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.701430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.701453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.701474 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.804991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.805032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.805041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.805056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.805067 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.908203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.908245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.908254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.908271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:49 crc kubenswrapper[4746]: I0103 03:15:49.908283 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:49Z","lastTransitionTime":"2026-01-03T03:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.010418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.010456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.010465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.010480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.010490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.112794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.112845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.112858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.112877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.112888 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.214903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.214934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.214943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.214955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.214965 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.316789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.316850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.316859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.316873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.316883 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.419197 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.419254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.419263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.419277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.419288 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.464844 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.464952 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.464973 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.465113 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.465114 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.465236 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.465393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.465596 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.489676 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92
920f023b1fdb7ea3022ce930\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.511816 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.521979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.522046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.522060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.522082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.522122 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.523485 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.538461 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.556480 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.571785 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.585296 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.605122 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.617800 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.620339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.620376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.620385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.620405 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.620419 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.631941 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.635104 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e
0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.646902 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.646961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.646975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.646996 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.647008 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.658832 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.666947 4746 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256
:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"si
zeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":46317936
5},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.671876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.671907 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.671918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.671933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.671942 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.693779 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.693835 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.702087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.702153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.702169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.702191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.702212 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.717168 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.720720 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.724828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.724850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.724858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.724874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.724884 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.732082 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03
T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.740992 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: E0103 03:15:50.741099 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.742442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.742461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.742470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.742483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.742492 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.748139 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.765496 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.776557 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.787859 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:50Z is after 2025-08-24T17:21:41Z" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.845174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.845210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.845219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.845231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.845241 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.946737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.946766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.946774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.946787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:50 crc kubenswrapper[4746]: I0103 03:15:50.946796 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:50Z","lastTransitionTime":"2026-01-03T03:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.049394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.049430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.049438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.049454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.049463 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.151634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.151692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.151701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.151715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.151723 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.253746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.253784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.253793 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.253805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.253813 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.356263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.356308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.356317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.356330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.356339 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.457853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.457898 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.457911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.457931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.457943 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.560508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.560554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.560569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.560588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.560601 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.662704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.662749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.662759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.662773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.662783 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.765503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.765545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.765556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.765571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.765583 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.868334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.868390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.868407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.868442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.868457 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.971454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.971506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.971516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.971534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:51 crc kubenswrapper[4746]: I0103 03:15:51.971546 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:51Z","lastTransitionTime":"2026-01-03T03:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.074637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.074733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.074758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.074827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.074853 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.177642 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.177730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.177746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.177771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.177789 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.280787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.280842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.280857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.280880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.280897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.382905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.382947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.382959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.382974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.382987 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.464903 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.464946 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.464975 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:52 crc kubenswrapper[4746]: E0103 03:15:52.465098 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.465147 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:52 crc kubenswrapper[4746]: E0103 03:15:52.465203 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:52 crc kubenswrapper[4746]: E0103 03:15:52.465762 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:52 crc kubenswrapper[4746]: E0103 03:15:52.466074 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.484596 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.484621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.484630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.484640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.484650 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.588067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.588133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.588174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.588198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.588215 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.691911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.691956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.691966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.691980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.691990 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.795023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.795071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.795079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.795095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.795105 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.897838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.897877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.897885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.897903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.897914 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.999607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.999668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.999678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.999691 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:52 crc kubenswrapper[4746]: I0103 03:15:52.999702 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:52Z","lastTransitionTime":"2026-01-03T03:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.101915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.101977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.101998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.102026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.102047 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.204346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.204378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.204386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.204399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.204407 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.307002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.307062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.307107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.307132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.307148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.409244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.409271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.409279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.409291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.409302 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.511386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.511425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.511434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.511449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.511463 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.613941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.613989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.614000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.614019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.614031 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.716342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.716374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.716382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.716396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.716407 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.819863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.819896 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.819905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.819919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.819927 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.921990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.922023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.922031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.922045 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:53 crc kubenswrapper[4746]: I0103 03:15:53.922055 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:53Z","lastTransitionTime":"2026-01-03T03:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.025311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.025381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.025402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.025439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.025462 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.127475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.127525 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.127542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.127563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.127579 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.230136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.230210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.230233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.230263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.230284 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.333671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.333735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.333744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.333758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.333767 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.436471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.436540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.436558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.436591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.436617 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.463968 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.464003 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.464005 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.464081 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:54 crc kubenswrapper[4746]: E0103 03:15:54.464237 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:54 crc kubenswrapper[4746]: E0103 03:15:54.464452 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:54 crc kubenswrapper[4746]: E0103 03:15:54.464938 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:54 crc kubenswrapper[4746]: E0103 03:15:54.465338 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.539139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.539209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.539231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.539258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.539279 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.642864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.642951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.642964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.642983 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.642994 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.754563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.754630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.754671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.754694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.754708 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.857011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.857058 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.857071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.857089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.857102 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.959844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.959901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.959916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.959937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:54 crc kubenswrapper[4746]: I0103 03:15:54.959952 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:54Z","lastTransitionTime":"2026-01-03T03:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.062813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.062871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.062888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.062912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.062929 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.165640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.165796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.165817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.165841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.165858 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.269075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.269135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.269152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.269177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.269197 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.371771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.371802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.371810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.371823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.371832 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.475518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.475595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.475615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.475640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.475723 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.581828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.581896 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.581921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.581949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.581970 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.685799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.685864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.685881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.685906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.685924 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.789292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.789358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.789375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.789399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.789417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.892849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.892921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.892945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.892979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.893002 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.996611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.996762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.996789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.996820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:55 crc kubenswrapper[4746]: I0103 03:15:55.996845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:55Z","lastTransitionTime":"2026-01-03T03:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.099776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.099840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.099858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.099883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.099901 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.203392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.203465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.203483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.203543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.203560 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.306786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.306830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.306841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.306859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.306875 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.409785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.410203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.410351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.410505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.410639 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.464840 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.464934 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.465075 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:56 crc kubenswrapper[4746]: E0103 03:15:56.465032 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.465107 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:56 crc kubenswrapper[4746]: E0103 03:15:56.465267 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:56 crc kubenswrapper[4746]: E0103 03:15:56.465440 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:56 crc kubenswrapper[4746]: E0103 03:15:56.465610 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.513588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.513647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.513707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.513742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.513768 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.617338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.617400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.617420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.617444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.617462 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.719938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.719989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.720005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.720028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.720044 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.822346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.822379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.822388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.822402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.822412 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.924958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.925038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.925048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.925062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:56 crc kubenswrapper[4746]: I0103 03:15:56.925071 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:56Z","lastTransitionTime":"2026-01-03T03:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.028478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.028532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.028547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.028567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.028581 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.131240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.131276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.131285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.131300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.131309 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.233640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.233697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.233709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.233727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.233739 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.337489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.337522 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.337533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.337547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.337557 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.440503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.440536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.440545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.440558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.440568 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.544266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.544332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.544351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.544377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.544400 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.648779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.648834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.648850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.648872 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.648890 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.752967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.753040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.753056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.753087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.753106 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.855547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.855615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.855627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.855645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.855677 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.959189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.959253 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.959271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.959298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:57 crc kubenswrapper[4746]: I0103 03:15:57.959319 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:57Z","lastTransitionTime":"2026-01-03T03:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.062581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.062697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.062726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.062757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.062776 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.165200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.165282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.165301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.165325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.165343 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.269462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.269542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.269571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.269602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.269630 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.374051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.374118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.374137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.374163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.374180 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.464816 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.464868 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:15:58 crc kubenswrapper[4746]: E0103 03:15:58.464938 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.464996 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.464996 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:15:58 crc kubenswrapper[4746]: E0103 03:15:58.465201 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:15:58 crc kubenswrapper[4746]: E0103 03:15:58.465467 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:15:58 crc kubenswrapper[4746]: E0103 03:15:58.465620 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.466480 4746 scope.go:117] "RemoveContainer" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.476369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.476425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.476445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.476470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.476490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.578648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.578695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.578705 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.578718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.578730 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.681576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.681628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.681645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.681694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.681712 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.783987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.784035 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.784048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.784068 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.784085 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.887575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.887704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.887726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.887750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.887770 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.991284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.991325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.991334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.991351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:58 crc kubenswrapper[4746]: I0103 03:15:58.991361 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:58Z","lastTransitionTime":"2026-01-03T03:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.094909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.094955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.094975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.094999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.095019 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.198230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.198297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.198318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.198346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.198367 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.301860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.301953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.301966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.302003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.302019 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.404201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.404227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.404234 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.404247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.404255 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.506975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.507052 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.507071 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.507100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.507120 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.609791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.609836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.609848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.609864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.609874 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.712752 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.712804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.712816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.712837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.712851 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.814882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.814934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.814945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.814966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.814979 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.918120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.918153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.918161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.918173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.918182 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:15:59Z","lastTransitionTime":"2026-01-03T03:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.989508 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/3.log" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.990348 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/2.log" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.993556 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" exitCode=1 Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.993622 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.993738 4746 scope.go:117] "RemoveContainer" containerID="3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930" Jan 03 03:15:59 crc kubenswrapper[4746]: I0103 03:15:59.995516 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:15:59 crc kubenswrapper[4746]: E0103 03:15:59.995919 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.014341 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.020709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.020737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.020749 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.020769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.020783 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.027893 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.040803 4746 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.054457 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.069054 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.086792 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.105523 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.123848 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.123893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.123905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.123926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.123940 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.124739 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.143160 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.157998 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.173745 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.195290 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.214068 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.226065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.226107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.226118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.226170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.226183 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.233122 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.258453 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:59Z\\\",\\\"message\\\":\\\"xternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0103 03:15:59.825979 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0103 03:15:59.826007 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 03:15:59.826007 6821 lb_config.go:1031] Cluster endpoints for openshift-console/console for network=default are: map[]\\\\nI0103 03:15:59.826073 6821 services_controller.go:443] Built service openshift-console/console LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.194\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0103 03:15:59.826092 6821 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.281129 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.297611 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 
03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.318370 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.328348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.328425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.328448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.328477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.328496 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.431815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.431857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.431866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.431881 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.431893 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.463979 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:00 crc kubenswrapper[4746]: E0103 03:16:00.464077 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.464246 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:00 crc kubenswrapper[4746]: E0103 03:16:00.464318 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.464430 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:00 crc kubenswrapper[4746]: E0103 03:16:00.464485 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.464698 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:00 crc kubenswrapper[4746]: E0103 03:16:00.464768 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.485312 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.500477 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.517975 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.535730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.535787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.535805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.535831 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.535850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.537939 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.554117 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.571887 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.586619 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.607160 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.631185 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.640080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.640162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.640182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.640216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.640236 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.652967 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.666216 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.686683 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.700325 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.713168 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.726761 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.743529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.743595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.743609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.743629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.743642 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.749783 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod 
openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:59Z\\\",\\\"message\\\":\\\"xternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0103 03:15:59.825979 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0103 03:15:59.826007 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 03:15:59.826007 6821 lb_config.go:1031] Cluster endpoints for openshift-console/console for network=default are: map[]\\\\nI0103 03:15:59.826073 6821 services_controller.go:443] Built service openshift-console/console LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.194\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0103 03:15:59.826092 6821 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 
0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.768706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.784545 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:00Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.847115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.847191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.847232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.847262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.847283 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.950576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.950618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.950640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.950695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:00 crc kubenswrapper[4746]: I0103 03:16:00.950711 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:00Z","lastTransitionTime":"2026-01-03T03:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.000465 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/3.log" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.053967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.054020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.054039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.054062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.054081 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.100187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.100245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.100262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.100288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.100307 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.118309 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.124413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.124460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.124478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.124501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.124521 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.146773 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.159031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.159101 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.159125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.159171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.159197 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.183995 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.190730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.190814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.190841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.190874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.190898 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.215415 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.220250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.220306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.220324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.220351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.220369 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.247637 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:01Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:01 crc kubenswrapper[4746]: E0103 03:16:01.247840 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.250290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.250350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.250369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.250393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.250410 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.353729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.353859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.353925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.353954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.353972 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.457026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.457099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.457123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.457154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.457177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.559970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.560044 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.560068 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.560100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.560124 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.662874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.663261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.663412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.663580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.663759 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.766732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.766804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.766844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.766878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.766900 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.869870 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.869925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.869944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.869968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.869991 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.974460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.974566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.974590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.974718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:01 crc kubenswrapper[4746]: I0103 03:16:01.974745 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:01Z","lastTransitionTime":"2026-01-03T03:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.078364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.078449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.078475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.078506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.078530 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.198696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.198779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.198813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.198843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.198867 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.302238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.302559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.302582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.302613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.302635 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.406013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.406133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.406202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.406239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.406261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.444772 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.444925 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.444895524 +0000 UTC m=+146.294785869 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.445020 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.445230 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.445302 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.445285634 +0000 UTC m=+146.295175979 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.464804 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.464866 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.464999 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.465050 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.465060 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.465181 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.465366 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.465468 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.509597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.509640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.509670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.509689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.509701 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.546305 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.546373 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.546444 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546496 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546635 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546651 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.546618338 +0000 UTC m=+146.396508683 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546693 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546710 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546727 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546763 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546781 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.546743651 +0000 UTC m=+146.396633966 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546786 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:16:02 crc kubenswrapper[4746]: E0103 03:16:02.546846 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.546828253 +0000 UTC m=+146.396718598 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.611591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.611640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.611684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.611710 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.611728 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.715327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.715599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.715622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.716582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.716862 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.820248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.820302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.820321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.820343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.820359 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.923776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.923829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.923845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.923871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:02 crc kubenswrapper[4746]: I0103 03:16:02.923890 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:02Z","lastTransitionTime":"2026-01-03T03:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.026073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.026129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.026137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.026169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.026178 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.129372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.129429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.129441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.129460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.129477 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.233450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.233530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.233563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.233588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.233607 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.337009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.337101 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.337119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.337188 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.337207 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.440689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.440750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.440760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.440792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.440803 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.543949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.544013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.544029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.544048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.544060 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.648201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.648290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.648318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.648353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.648380 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.753032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.753087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.753111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.753138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.753158 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.857203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.857246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.857256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.857272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.857282 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.960824 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.960908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.960929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.960965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:03 crc kubenswrapper[4746]: I0103 03:16:03.960989 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:03Z","lastTransitionTime":"2026-01-03T03:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.064686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.064747 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.064762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.064788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.065369 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.169139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.169195 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.169214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.169241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.169261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.272116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.272417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.272513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.272620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.272757 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.375123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.375159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.375167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.375211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.375222 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.464772 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.464868 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.464970 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.465001 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:04 crc kubenswrapper[4746]: E0103 03:16:04.464984 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:04 crc kubenswrapper[4746]: E0103 03:16:04.465100 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:04 crc kubenswrapper[4746]: E0103 03:16:04.465151 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:04 crc kubenswrapper[4746]: E0103 03:16:04.465199 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.477773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.477794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.477802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.477815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.477825 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.579630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.579694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.579707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.579726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.579737 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.682731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.682777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.682788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.683100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.683157 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.786262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.786320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.786346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.786368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.786385 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.888926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.888987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.889009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.889037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.889060 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.992358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.992419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.992438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.992461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:04 crc kubenswrapper[4746]: I0103 03:16:04.992478 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:04Z","lastTransitionTime":"2026-01-03T03:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.096200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.096272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.096291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.096318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.096338 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.199038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.199138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.199156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.199183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.199205 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.301678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.301719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.301729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.301745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.301759 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.404499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.404582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.404610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.404636 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.404691 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.508021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.508069 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.508080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.508094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.508105 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.610498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.610561 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.610583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.610612 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.610636 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.713964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.714028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.714048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.714075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.714096 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.817035 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.817107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.817135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.817170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.817192 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.919354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.919430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.919466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.919499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:05 crc kubenswrapper[4746]: I0103 03:16:05.919527 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:05Z","lastTransitionTime":"2026-01-03T03:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.024148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.024248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.024274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.024315 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.024344 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.127051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.127129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.127151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.127182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.127204 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.230609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.230708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.230727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.230756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.230776 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.334113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.334173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.334193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.334218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.334239 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.436945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.437009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.437026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.437053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.437071 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.464039 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.464039 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:06 crc kubenswrapper[4746]: E0103 03:16:06.464235 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.464274 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.464054 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:06 crc kubenswrapper[4746]: E0103 03:16:06.464443 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:06 crc kubenswrapper[4746]: E0103 03:16:06.464630 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:06 crc kubenswrapper[4746]: E0103 03:16:06.464796 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.540554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.540637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.540687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.540721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.540743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.644065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.644165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.644200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.644241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.644264 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.747801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.747867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.747896 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.747926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.747951 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.851456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.851518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.851536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.851559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.851578 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.955356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.955416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.955435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.955460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:06 crc kubenswrapper[4746]: I0103 03:16:06.955480 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:06Z","lastTransitionTime":"2026-01-03T03:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.057978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.058043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.058068 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.058098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.058122 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.160894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.160984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.161007 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.161033 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.161053 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.264314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.264382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.264403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.264434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.264455 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.367993 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.368099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.368118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.368694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.368753 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.472973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.473055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.473075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.473110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.473134 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.576890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.576976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.577000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.577036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.577059 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.680529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.680586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.680602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.680625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.680642 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.783505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.783585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.783602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.783627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.783646 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.887834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.887933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.887968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.888005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.888033 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.991585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.991725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.991759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.991790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:07 crc kubenswrapper[4746]: I0103 03:16:07.991812 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:07Z","lastTransitionTime":"2026-01-03T03:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.095149 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.095203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.095219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.095238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.095251 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.198550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.198607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.198620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.198644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.198676 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.303920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.304008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.304036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.304070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.304171 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.408818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.408904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.408924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.408953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.408975 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.464259 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.464401 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.464282 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:08 crc kubenswrapper[4746]: E0103 03:16:08.464520 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.464757 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:08 crc kubenswrapper[4746]: E0103 03:16:08.464762 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:08 crc kubenswrapper[4746]: E0103 03:16:08.464903 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:08 crc kubenswrapper[4746]: E0103 03:16:08.465079 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.512814 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.512895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.512916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.512947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.512969 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.616894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.617372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.617495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.617619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.617784 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.721367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.721952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.722055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.722164 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.722263 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.826386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.826753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.826918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.827061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.827161 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.930796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.930858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.930875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.930901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:08 crc kubenswrapper[4746]: I0103 03:16:08.930921 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:08Z","lastTransitionTime":"2026-01-03T03:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.034871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.034943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.034962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.034995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.035015 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.138000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.138065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.138080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.138110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.138126 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.241712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.241784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.241798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.241823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.241839 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.345323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.345368 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.345380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.345403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.345417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.448778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.448864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.448891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.448929 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.448959 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.552289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.552362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.552386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.552420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.552446 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.656746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.656801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.656817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.656844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.656856 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.759637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.759763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.759783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.759812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.759835 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.862908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.862982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.862999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.863025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.863047 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.966912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.967187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.967228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.967264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:09 crc kubenswrapper[4746]: I0103 03:16:09.967291 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:09Z","lastTransitionTime":"2026-01-03T03:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.069991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.070149 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.070170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.070199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.070218 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.174844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.174947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.174966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.174998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.175021 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.278051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.278129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.279621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.279688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.279713 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.383945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.384021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.384038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.384067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.384087 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.464647 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:10 crc kubenswrapper[4746]: E0103 03:16:10.464894 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.465209 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.465214 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:10 crc kubenswrapper[4746]: E0103 03:16:10.465393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:10 crc kubenswrapper[4746]: E0103 03:16:10.465556 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.465692 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:10 crc kubenswrapper[4746]: E0103 03:16:10.465823 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488227 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.488706 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.502784 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.519003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.547298 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca
499b28ee9b10361624118add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0f4e1fd99dc4615625152d95cf93f21fe63b92920f023b1fdb7ea3022ce930\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:33Z\\\",\\\"message\\\":\\\": *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0103 03:15:33.460365 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0103 03:15:33.460413 6474 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF0103 03:15:33.460412 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:15:33Z is after 2025-08-24T17:21:41Z]\\\\nI0103 03:15:33.460422 6474 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identit\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:59Z\\\",\\\"message\\\":\\\"xternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0103 03:15:59.825979 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0103 03:15:59.826007 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 03:15:59.826007 6821 lb_config.go:1031] Cluster endpoints for openshift-console/console for network=default are: map[]\\\\nI0103 03:15:59.826073 6821 services_controller.go:443] Built service openshift-console/console LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.194\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}}\\\\nF0103 03:15:59.826092 6821 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.583835 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.592105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.592363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc 
kubenswrapper[4746]: I0103 03:16:10.592507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.592708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.592884 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.608073 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:1
5:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.627785 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"
webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.642253 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 
03:16:10.657842 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.677718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.695991 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.696865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.696917 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.696928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.696946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.696962 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.713115 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.731308 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.750500 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.776187 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801284 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.801274 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.825290 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.847850 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:10Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.904767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.904839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.904856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.904887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:10 crc kubenswrapper[4746]: I0103 03:16:10.904906 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:10Z","lastTransitionTime":"2026-01-03T03:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.008842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.008923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.008945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.008975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.008997 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.112694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.112780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.112806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.112846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.112874 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.216534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.216618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.216643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.216725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.216760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.308206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.308289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.308313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.308347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.308371 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.331511 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.338226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.338295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.338313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.338337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.338353 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.351259 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.355602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.355640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.355667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.355689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.355703 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.373781 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.379383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.379457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.379476 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.379507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.379530 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.399266 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.408185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.408279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.408308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.408348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.408390 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.436069 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:11Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:11 crc kubenswrapper[4746]: E0103 03:16:11.436380 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.439563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.439640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.439684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.439718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.439741 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.543614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.543690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.543709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.543736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.543754 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.647673 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.647748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.647762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.647785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.647824 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.751465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.751536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.751550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.751568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.751579 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.854647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.854766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.854783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.854808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.854850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.958601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.958690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.958709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.958738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:11 crc kubenswrapper[4746]: I0103 03:16:11.958759 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:11Z","lastTransitionTime":"2026-01-03T03:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.062538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.062613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.062634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.062694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.062717 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.166353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.166434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.166455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.166487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.166508 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.270157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.270240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.270267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.270298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.270325 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.374275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.374347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.374360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.374387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.374402 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.464282 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.464315 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.464323 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.464615 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:12 crc kubenswrapper[4746]: E0103 03:16:12.464768 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:12 crc kubenswrapper[4746]: E0103 03:16:12.464950 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:12 crc kubenswrapper[4746]: E0103 03:16:12.465182 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:12 crc kubenswrapper[4746]: E0103 03:16:12.465314 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.477376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.477471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.477491 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.477518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.477537 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.580496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.580550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.580562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.580580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.580592 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.684165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.684233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.684251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.684280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.684302 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.788414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.788477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.788507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.788541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.788565 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.892341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.892425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.892451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.892484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.892506 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.995935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.996036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.996065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.996103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:12 crc kubenswrapper[4746]: I0103 03:16:12.996128 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:12Z","lastTransitionTime":"2026-01-03T03:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.100190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.100261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.100279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.100309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.100328 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.202187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.202216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.202224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.202236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.202247 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.304400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.304447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.304456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.304469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.304477 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.407621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.407729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.407749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.407778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.407801 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.465824 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:16:13 crc kubenswrapper[4746]: E0103 03:16:13.466141 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.482335 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.504530 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.516227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.516281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.516297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.516322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 
03:16:13.516340 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.522928 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.546614 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.582496 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca
499b28ee9b10361624118add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:59Z\\\",\\\"message\\\":\\\"xternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0103 03:15:59.825979 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0103 03:15:59.826007 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 03:15:59.826007 6821 lb_config.go:1031] Cluster endpoints for openshift-console/console for network=default are: map[]\\\\nI0103 03:15:59.826073 6821 services_controller.go:443] Built service openshift-console/console LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.194\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0103 03:15:59.826092 6821 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.597769 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.609200 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.619252 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.619284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.619293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.619309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.619319 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.620292 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.631396 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.644156 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.657625 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.669017 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.678914 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.691370 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.704166 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.716079 4746 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.722286 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.722319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.722328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.722344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.722355 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.728491 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.740803 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:13Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.824992 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.825048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.825060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.825074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.825084 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.927489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.927520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.927529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.927545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:13 crc kubenswrapper[4746]: I0103 03:16:13.927554 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:13Z","lastTransitionTime":"2026-01-03T03:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.030256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.030314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.030337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.030366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.030390 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.133140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.133198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.133211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.133226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.133238 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.236050 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.236122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.236138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.236162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.236179 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.339468 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.339538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.339563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.339593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.339620 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.442765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.442837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.442859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.442890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.442916 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.465048 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.465139 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.465215 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.465441 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:14 crc kubenswrapper[4746]: E0103 03:16:14.465441 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:14 crc kubenswrapper[4746]: E0103 03:16:14.465622 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:14 crc kubenswrapper[4746]: E0103 03:16:14.465818 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:14 crc kubenswrapper[4746]: E0103 03:16:14.465941 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.546207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.546272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.546293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.546318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.546336 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.649351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.649411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.649428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.649450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.649466 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.752232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.752287 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.752305 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.752328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.752346 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.854880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.854922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.854933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.854951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.854963 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.957710 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.957779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.957820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.957856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:14 crc kubenswrapper[4746]: I0103 03:16:14.957878 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:14Z","lastTransitionTime":"2026-01-03T03:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.061428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.061484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.061504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.061529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.061546 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.165156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.165209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.165226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.165250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.165268 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.268220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.268282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.268303 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.268324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.268341 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.371782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.371852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.371871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.371894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.371910 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.475743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.476372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.476391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.476690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.476715 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.579155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.579224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.579248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.579279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.579297 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.682243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.682299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.682317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.682342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.682362 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.786185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.786239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.786255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.786277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.786294 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.889132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.889188 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.889199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.889214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.889225 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.992515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.992617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.992637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.992729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:15 crc kubenswrapper[4746]: I0103 03:16:15.992756 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:15Z","lastTransitionTime":"2026-01-03T03:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.095205 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.095267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.095286 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.095313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.095331 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.198268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.198340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.198358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.198382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.198401 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.300919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.300985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.301008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.301037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.301060 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.405043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.405105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.405124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.405151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.405169 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.464035 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.464074 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.464305 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:16 crc kubenswrapper[4746]: E0103 03:16:16.464602 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:16 crc kubenswrapper[4746]: E0103 03:16:16.464958 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.464982 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:16 crc kubenswrapper[4746]: E0103 03:16:16.465042 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:16 crc kubenswrapper[4746]: E0103 03:16:16.465167 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.509269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.509360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.509384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.509422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.509443 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.611672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.611701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.611709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.611721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.611729 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.715053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.715130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.715151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.715175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.715191 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.817506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.817562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.817582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.817609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.817627 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.919856 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.919923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.919949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.919976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:16 crc kubenswrapper[4746]: I0103 03:16:16.919997 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:16Z","lastTransitionTime":"2026-01-03T03:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.013355 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:17 crc kubenswrapper[4746]: E0103 03:16:17.013512 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:16:17 crc kubenswrapper[4746]: E0103 03:16:17.013605 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs podName:28a574f3-8744-4d57-aada-e4b328244e19 nodeName:}" failed. No retries permitted until 2026-01-03 03:17:21.013580689 +0000 UTC m=+160.863471024 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs") pod "network-metrics-daemon-57tv2" (UID: "28a574f3-8744-4d57-aada-e4b328244e19") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.023919 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.023994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.024009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.024055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.024071 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.127169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.127226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.127243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.127265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.127281 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.230245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.230299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.230320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.230342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.230359 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.333979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.334043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.334061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.334084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.334101 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.437320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.437384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.437406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.437435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.437453 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.539343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.539418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.539441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.539467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.539485 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.642938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.642977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.642989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.643004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.643016 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.746020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.746072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.746083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.746099 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.746110 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.848419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.848461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.848470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.848484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.848494 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.951695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.951786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.951804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.951829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:17 crc kubenswrapper[4746]: I0103 03:16:17.951845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:17Z","lastTransitionTime":"2026-01-03T03:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.054580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.054623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.054633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.054650 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.054685 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.157119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.157214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.157257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.157278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.157292 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.260751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.260849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.260865 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.260889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.260907 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.364331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.364372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.364382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.364397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.364406 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.464462 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.464575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.464599 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:18 crc kubenswrapper[4746]: E0103 03:16:18.464808 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.464907 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:18 crc kubenswrapper[4746]: E0103 03:16:18.465046 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:18 crc kubenswrapper[4746]: E0103 03:16:18.465171 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:18 crc kubenswrapper[4746]: E0103 03:16:18.465436 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.466577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.466630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.466647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.466707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.466733 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.569648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.569749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.569767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.569790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.569808 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.673095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.673153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.673179 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.673207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.673227 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.776041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.776091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.776103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.776118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.776128 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.878717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.878790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.878812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.878836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.878854 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.981849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.981940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.981962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.981990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:18 crc kubenswrapper[4746]: I0103 03:16:18.982011 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:18Z","lastTransitionTime":"2026-01-03T03:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.084527 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.084585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.084599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.084616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.084681 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.187564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.187613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.187628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.187647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.187677 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.291202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.291260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.291313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.291337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.291354 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.394054 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.394117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.394134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.394159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.394177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.485112 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.497039 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.497085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.497104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.497128 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.497145 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.599943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.599998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.600014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.600036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.600053 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.702873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.702931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.702948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.702974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.702990 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.806320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.806404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.806424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.806447 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.806463 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.910530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.910594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.910611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.910638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:19 crc kubenswrapper[4746]: I0103 03:16:19.910690 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:19Z","lastTransitionTime":"2026-01-03T03:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.013962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.014029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.014048 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.014074 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.014095 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.117169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.117233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.117251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.117281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.117299 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.220511 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.220562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.220578 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.220605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.220626 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.323209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.323295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.323311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.323334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.323354 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.426013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.426081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.426105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.426130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.426148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.464713 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.464802 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:20 crc kubenswrapper[4746]: E0103 03:16:20.464921 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.464729 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.464973 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:20 crc kubenswrapper[4746]: E0103 03:16:20.465048 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:20 crc kubenswrapper[4746]: E0103 03:16:20.465256 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:20 crc kubenswrapper[4746]: E0103 03:16:20.465356 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.484833 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7142ec38-9a97-44ed-81f6-9771ec5f9aec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6485b6bcfcac57b1efc93706a32903224074d350aeffa02e2bf8dff7e884960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39c7d27f45a49844b51c4529178e7fb2e6edacd1d0edc9000e8ef6950fbdb2a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03772a43cbf7c347815b82dea5e3e725186fd97c66249994c0aaaee95bb55b9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.501792 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc47990-827b-4c2d-be19-ade93a42e533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e68c157a0cda26a4e1ee7910c94e1a7f76477aec7bfd2f0909efac17943dffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2245293aab19588f66fa7f4b671038309c7e31e4523e2565179bd1ebd99a38a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.516941 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hm664" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1722955c-53eb-4bf4-91dc-d3478c190baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef43878c93da07c54b007c0ba8658a707fc8ea852970e92624c3144cd79d1f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fllc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hm664\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.529922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.530002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.530027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.530057 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.530079 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.536147 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c192a654ef2293450aa10823aea84599fed416a894e3bbcbf355d5fecf52b3f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.554803 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b3b853-9953-4039-964d-841a01708848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eba3c94a7341198cfee4222d42f93c36fbf9fc53564e9784cba039daa5aa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bfmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:59Z\\\"
}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8lt5d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.576326 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-plg55" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7938adea-5f3a-4bfa-8776-f8b06ce7219e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:47Z\\\",\\\"message\\\":\\\"2026-01-03T03:15:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca\\\\n2026-01-03T03:15:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b9bbae53-f83e-4e7e-aacf-12a238f32aca to /host/opt/cni/bin/\\\\n2026-01-03T03:15:01Z [verbose] multus-daemon started\\\\n2026-01-03T03:15:01Z [verbose] Readiness Indicator file check\\\\n2026-01-03T03:15:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-595s4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-plg55\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.592410 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ab81ded7aba746a91c65b1a38f230b4e731d5fb8321f96700d748e18cbe457c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.610258 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.626444 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.632521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.632562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.632580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.632603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.632621 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.642622 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cb36226-f723-4cc8-b765-07aaa195cd44\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW0103 03:14:57.997354 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0103 03:14:57.997484 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0103 03:14:57.998431 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1762983657/tls.crt::/tmp/serving-cert-1762983657/tls.key\\\\\\\"\\\\nI0103 03:14:58.235379 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0103 03:14:58.238752 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0103 03:14:58.238776 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0103 03:14:58.238817 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0103 03:14:58.238823 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0103 03:14:58.245098 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0103 03:14:58.245136 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245142 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0103 03:14:58.245149 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0103 03:14:58.245154 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0103 03:14:58.245158 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0103 03:14:58.245162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0103 03:14:58.245406 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0103 03:14:58.247766 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.655496 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b9ec2e4-c510-44e8-97b6-11718f5408a6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b11a17616332639604d1ff10668fa24a11ce229e989f59649e1bea6f4024d06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a11ee58c382468d8e8914d2dc0eff9efe32830561435c5ca6d683361c801d7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9305a3cb80557564982ba05f0a1edcdff8a524241e1a2c2a6f93e9637b91cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e037c1ab71864f7af3b0187cf848a6f393b879bd5d6e7822cdf22bcd83f0b9e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.670432 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.692529 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca
499b28ee9b10361624118add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-03T03:15:59Z\\\",\\\"message\\\":\\\"xternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.194],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0103 03:15:59.825979 6821 ovnkube.go:599] Stopped ovnkube\\\\nI0103 03:15:59.826007 6821 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0103 03:15:59.826007 6821 lb_config.go:1031] Cluster endpoints for openshift-console/console for network=default are: map[]\\\\nI0103 03:15:59.826073 6821 services_controller.go:443] Built service openshift-console/console LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.194\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF0103 03:15:59.826092 6821 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e6002\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhbjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-rzrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.708975 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gnct7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"784eb651-1784-4e2a-b0ca-34163f44525c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a19b6d8aef4e6e7428ee7a6bc79e5943ea69fd4d6b8479b6136c581e3ba88c8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42674aa010ac70cd4636630764cef65fb37af874d71ff1803113134bb7ca6e25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02dd03c61c6d9bbcccdf3f0a8fdb8fe7b89530a8cfde4184ab9524c451620fd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6bbfbe177f33364f5998428cdcb144dfe2e1b8dc9e4ab7d9fdc55379fc1e0ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1ffe184d626f8a4aec4560c9cacfc8343a4f700b80fd2cb5f575fa9ec3d6df3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4488a593ebb771ef06bcb5663f697dfd6c62229f9720aa9c9578961ce2e1de36\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23d3ce51f7d32afb09c64a917020d46675e31f29ce411d29ef00b56990e5c33e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:15:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87f6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gnct7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.719910 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.721130 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:16:20 crc kubenswrapper[4746]: E0103 03:16:20.721364 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.721716 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0be8c1d3-1da1-4359-a875-be014834495c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cf46e4193fa93c4601ffd1bd3c7bef5b852e22827a91097f04bf0491865d12f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a7ab9c6c46b63d4eb710885c407bd2dd23e0725fa6994ad1abb09cf66e728e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57qzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hwmmc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 
03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.735466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.735612 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.735721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.735835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.735925 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.739910 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e86cf2c2-622e-4560-afea-28ffb9e7ca9c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c58f3915b8ba21a2f08cc0e9923e92178dea4792988545ea876da5e3e5e788f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f54b5dfd9a52208ea24331b57b4933d79084d651a4bb0e802acb8896336987\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21f25c657a9225e17a2078652eb3a65451f0e8dc69ba0f8149f361ad5ecb34c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470ef9fc25b7b74267ded985b5f2714bfa12dfb3acd4762f5722753c2b998592\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://54e8c9d5db16a894fac0ee567110601d0a1c892577c765800214e462c077e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b6e677b77a82f6ad6da9e80f8c812caea38bf5c95ad75a72051f529b55d3ddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b6e677b77a82f6ad6da9e80f8c812caea38bf5c95ad75a72051f529b55d3ddd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fd0d612e9b8971c266bdb5f1cbb79e63e81cbf60caa0064be15e662dc64c2a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7fd0d612e9b8971c266bdb5f1cbb79e63e81cbf60caa0064be15e662dc64c2a3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:41Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5a7150c73bd47a05320dfd3b5526df6d6a2990c80d90fd5b9441969bc4a2e509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a7150c73bd47a05320dfd3b5526df6d6a2990c80d90fd5b9441969bc4a2e509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-03T03:14:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-03T03:14:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:14:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.752695 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-03T03:14:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71f27966ffc0107fa637df9da014c0ab8daeca6c957ab43cf1ff4de33425bdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa872e33a485d98a8b02b81ad29b89c2f7c929f61e04e2cbf2539c3d2256db2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.763951 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tzqwd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91d74e64-7231-46aa-9cef-cb0212ef6396\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988ce65dbc3760c69955383e78de0bcb35bec6a3eedea0bad8cc0e55031cd91f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-03T03:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6b5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tzqwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.775000 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-57tv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28a574f3-8744-4d57-aada-e4b328244e19\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-03T03:15:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clfq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-03T03:15:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-57tv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:20Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.839599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.839702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.839722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.839751 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.839774 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.942014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.942118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.942158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.942222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:20 crc kubenswrapper[4746]: I0103 03:16:20.942240 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:20Z","lastTransitionTime":"2026-01-03T03:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.045844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.046421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.046468 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.046495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.046513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.148821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.148886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.148908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.148935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.148954 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.251507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.251815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.251931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.252060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.252157 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.354516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.354612 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.354630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.354683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.354701 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.457424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.457480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.457493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.457509 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.457522 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.559630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.559728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.559748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.559772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.559791 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.663725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.663780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.663797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.663821 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.663838 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.714852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.714950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.714968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.714991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.715010 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: E0103 03:16:21.734515 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:21Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.739909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.739957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.739974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.739997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.740014 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: E0103 03:16:21.759848 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-03T03:16:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6aefa87f-1f87-4c4a-a02a-a9b058286472\\\",\\\"systemUUID\\\":\\\"e0c9d956-6366-4423-bba4-4b3a38c60b92\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-03T03:16:21Z is after 2025-08-24T17:21:41Z" Jan 03 03:16:21 crc kubenswrapper[4746]: E0103 03:16:21.831447 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.833085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.833134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.833143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.833159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.833170 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.936126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.936182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.936193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.936213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:21 crc kubenswrapper[4746]: I0103 03:16:21.936225 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:21Z","lastTransitionTime":"2026-01-03T03:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.038984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.039055 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.039075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.039104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.039128 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.141733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.141788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.141799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.141816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.141829 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.245796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.245868 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.245888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.245923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.245942 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.349855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.349913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.349931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.349958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.349976 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.452371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.452416 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.452427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.452446 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.452458 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.464497 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.464505 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:22 crc kubenswrapper[4746]: E0103 03:16:22.464684 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.464760 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.464887 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:22 crc kubenswrapper[4746]: E0103 03:16:22.465028 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:22 crc kubenswrapper[4746]: E0103 03:16:22.465331 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
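Every KubeletNotReady heartbeat and every "Error syncing pod, skipping" entry in this stretch reports the same root cause: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, so the runtime stays NetworkReady=false and sandboxes for the network pods cannot be created. A small, hypothetical check (stdlib only; the directory comes from the log, and the extension filter is an assumption about typical CNI config file names) that mirrors what the kubelet message asserts:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Directory named in the kubelet error above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir+":", err)
		return
	}
	found := 0
	for _, e := range entries {
		// Typical CNI config extensions (assumed): .conf, .conflist, .json.
		ext := strings.ToLower(filepath.Ext(e.Name()))
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; the network plugin has not written its config yet")
	}
}
```

An empty result here is consistent with the repeated NetworkPluginNotReady condition; once the network provider writes its configuration into that directory, the Ready condition should clear on its own.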
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:22 crc kubenswrapper[4746]: E0103 03:16:22.465491 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.555247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.555301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.555319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.555345 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.555364 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.657450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.657563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.657588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.657628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.657702 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.761145 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.761224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.761251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.761283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.761307 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.864411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.864474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.864494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.864521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.864541 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.967699 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.967742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.967754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.967773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:22 crc kubenswrapper[4746]: I0103 03:16:22.967786 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:22Z","lastTransitionTime":"2026-01-03T03:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.070492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.070546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.070564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.070588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.070611 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.173643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.173688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.173697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.173709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.173718 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.276230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.276320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.276339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.276378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.276401 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.380547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.380589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.380598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.380619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.380638 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.483201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.483256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.483269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.483288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.483301 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.587178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.587231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.587240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.587257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.587267 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.690087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.690763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.690863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.690938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.690996 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.794081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.794567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.794829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.795054 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.795265 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.898148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.898211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.898228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.898254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:23 crc kubenswrapper[4746]: I0103 03:16:23.898273 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:23Z","lastTransitionTime":"2026-01-03T03:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.001466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.001860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.001955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.002024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.002091 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.105015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.105067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.105079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.105098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.105111 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.208310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.208371 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.208390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.208419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.208440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.312038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.312341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.312493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.312601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.312699 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.415153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.415218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.415237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.415266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.415284 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.465049 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.465201 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.465288 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:24 crc kubenswrapper[4746]: E0103 03:16:24.465491 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:24 crc kubenswrapper[4746]: E0103 03:16:24.465639 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.465693 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:24 crc kubenswrapper[4746]: E0103 03:16:24.465916 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:24 crc kubenswrapper[4746]: E0103 03:16:24.466092 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.518429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.518513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.518534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.518566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.518591 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.620702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.620761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.620779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.620803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.620822 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.724578 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.724638 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.724649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.724688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.724700 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.827771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.827818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.827829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.827849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.827861 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.930531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.930580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.930590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.930609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:24 crc kubenswrapper[4746]: I0103 03:16:24.930620 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:24Z","lastTransitionTime":"2026-01-03T03:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.033767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.033840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.033855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.033877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.033897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.136038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.136089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.136098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.136113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.136123 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.239224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.239289 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.239301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.239317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.239328 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.342632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.342692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.342702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.342720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.342732 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.446119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.446165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.446177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.446194 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.446206 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.549216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.549256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.549266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.549283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.549294 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.651975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.652043 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.652059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.652107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.652124 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.761144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.761222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.761267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.761303 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.761328 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.864367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.864426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.864441 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.864465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.864483 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.967877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.967945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.967962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.967990 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:25 crc kubenswrapper[4746]: I0103 03:16:25.968007 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:25Z","lastTransitionTime":"2026-01-03T03:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.070767 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.070841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.070863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.070892 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.070914 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.173964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.174013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.174030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.174051 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.174069 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.277550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.277626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.277644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.277701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.277724 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.381271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.381328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.381347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.381365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.381376 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.464847 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.464925 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:26 crc kubenswrapper[4746]: E0103 03:16:26.465014 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.464933 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:26 crc kubenswrapper[4746]: E0103 03:16:26.465118 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.465154 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:26 crc kubenswrapper[4746]: E0103 03:16:26.465248 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:26 crc kubenswrapper[4746]: E0103 03:16:26.465292 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
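The same four pods (network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g, network-metrics-daemon-57tv2, and network-check-source-55646444c4-trplf) are retried every couple of seconds with an identical outcome. Below is a hypothetical helper for quantifying that churn from a saved copy of this journal; the file name is an example, and the regular expression keys off the exact "Error syncing pod, skipping" wording seen above.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Assumed file name: a saved copy of this kubelet journal.
	f, err := os.Open("kubelet-journal.log")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	// Match the pod reference on each "Error syncing pod, skipping" record.
	podRe := regexp.MustCompile(`"Error syncing pod, skipping".*pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := podRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}
```

Run over this capture, it would simply tally how many times each of the four pods failed to sync while the CNI configuration was missing.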
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.484064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.484350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.484365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.484383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.484395 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.587141 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.587205 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.587223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.587245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.587263 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.690222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.690299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.690324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.690352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.690370 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.793569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.793689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.793711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.793736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.793752 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.897338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.897391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.897400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.897418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:26 crc kubenswrapper[4746]: I0103 03:16:26.897429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:26Z","lastTransitionTime":"2026-01-03T03:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.001053 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.001132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.001145 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.001171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.001186 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.103572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.103605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.103614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.103626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.103634 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.207041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.207084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.207093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.207112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.207130 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.310397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.310468 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.310489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.310518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.310540 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.414244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.414358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.414380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.414407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.414429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.517565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.517754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.517779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.517806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.517825 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.621072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.621151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.621176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.621207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.621230 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.724646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.724755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.724780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.724811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.724832 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.827771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.827848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.827866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.827891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.827908 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.931110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.931169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.931181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.931201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:27 crc kubenswrapper[4746]: I0103 03:16:27.931215 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:27Z","lastTransitionTime":"2026-01-03T03:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.034277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.034344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.034366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.034399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.034422 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.137542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.137590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.137602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.137621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.137635 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.241046 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.241088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.241098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.241114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.241123 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.343522 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.343609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.343639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.343711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.343737 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.446988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.447069 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.447091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.447116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.447137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.464848 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.464854 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.464926 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.465110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:28 crc kubenswrapper[4746]: E0103 03:16:28.465305 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:28 crc kubenswrapper[4746]: E0103 03:16:28.465378 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:28 crc kubenswrapper[4746]: E0103 03:16:28.465434 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:28 crc kubenswrapper[4746]: E0103 03:16:28.466470 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.550096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.550155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.550168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.550193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.550207 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.653810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.653884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.653902 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.653932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.653955 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.756998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.757054 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.757064 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.757082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.757094 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.859553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.859600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.859649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.859718 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.859738 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.961762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.961820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.961833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.961851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:28 crc kubenswrapper[4746]: I0103 03:16:28.961866 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:28Z","lastTransitionTime":"2026-01-03T03:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.065006 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.065056 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.065067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.065083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.065092 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.168342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.168383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.168393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.168406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.168416 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.271419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.271488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.271500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.271514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.271523 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.374306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.374370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.374392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.374421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.374447 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.477152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.477193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.477202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.477217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.477226 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.579874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.579932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.579949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.579966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.579977 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.682265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.682301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.682309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.682323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.682331 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.784574 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.784822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.784838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.784926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.784942 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.887181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.887297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.887309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.887327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.887337 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.990172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.990248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.990261 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.990277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:29 crc kubenswrapper[4746]: I0103 03:16:29.990288 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:29Z","lastTransitionTime":"2026-01-03T03:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.093083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.093123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.093136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.093151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.093163 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.196466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.196503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.196514 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.196528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.196538 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.299174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.299217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.299229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.299247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.299259 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.401321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.401372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.401388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.401409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.401425 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.464421 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.464484 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.464553 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:30 crc kubenswrapper[4746]: E0103 03:16:30.464551 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.464695 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:30 crc kubenswrapper[4746]: E0103 03:16:30.464782 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:30 crc kubenswrapper[4746]: E0103 03:16:30.464874 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:30 crc kubenswrapper[4746]: E0103 03:16:30.465372 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.504789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.504837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.504854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.504877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.504894 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.592360 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gnct7" podStartSLOduration=91.592339542 podStartE2EDuration="1m31.592339542s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.576688848 +0000 UTC m=+110.426579183" watchObservedRunningTime="2026-01-03 03:16:30.592339542 +0000 UTC m=+110.442229847" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.592605 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hwmmc" podStartSLOduration=91.592601098 podStartE2EDuration="1m31.592601098s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.592109077 +0000 UTC m=+110.441999382" watchObservedRunningTime="2026-01-03 03:16:30.592601098 +0000 UTC m=+110.442491403" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.608084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.608116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.608125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.608140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.608152 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.622560 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.622537755 podStartE2EDuration="1m1.622537755s" podCreationTimestamp="2026-01-03 03:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.621359376 +0000 UTC m=+110.471249691" watchObservedRunningTime="2026-01-03 03:16:30.622537755 +0000 UTC m=+110.472428080" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.622786 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.62278101 podStartE2EDuration="1m32.62278101s" podCreationTimestamp="2026-01-03 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.610284461 +0000 UTC m=+110.460174816" watchObservedRunningTime="2026-01-03 03:16:30.62278101 +0000 UTC m=+110.472671325" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.682380 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.682360276 podStartE2EDuration="11.682360276s" podCreationTimestamp="2026-01-03 03:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.681907505 +0000 UTC m=+110.531797840" watchObservedRunningTime="2026-01-03 03:16:30.682360276 +0000 UTC m=+110.532250601" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.706201 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tzqwd" podStartSLOduration=91.706180886 podStartE2EDuration="1m31.706180886s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.705383337 +0000 UTC m=+110.555273682" watchObservedRunningTime="2026-01-03 03:16:30.706180886 +0000 UTC m=+110.556071201" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.710177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.710433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.710625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.710899 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.711107 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.751092 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podStartSLOduration=91.75107064 podStartE2EDuration="1m31.75107064s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.733202793 +0000 UTC m=+110.583093118" watchObservedRunningTime="2026-01-03 03:16:30.75107064 +0000 UTC m=+110.600960965" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.751414 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-plg55" podStartSLOduration=91.751405158 podStartE2EDuration="1m31.751405158s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.750489366 +0000 UTC m=+110.600379681" watchObservedRunningTime="2026-01-03 03:16:30.751405158 +0000 UTC m=+110.601295473" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.763503 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.763478237 podStartE2EDuration="1m28.763478237s" podCreationTimestamp="2026-01-03 03:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.763313773 +0000 UTC m=+110.613204138" watchObservedRunningTime="2026-01-03 03:16:30.763478237 +0000 UTC m=+110.613368562" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.780646 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=45.780613957 podStartE2EDuration="45.780613957s" podCreationTimestamp="2026-01-03 03:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.773359174 +0000 UTC m=+110.623249489" watchObservedRunningTime="2026-01-03 03:16:30.780613957 +0000 UTC m=+110.630504312" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.802493 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hm664" podStartSLOduration=92.8024712 podStartE2EDuration="1m32.8024712s" podCreationTimestamp="2026-01-03 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:30.801255061 +0000 UTC m=+110.651145376" watchObservedRunningTime="2026-01-03 03:16:30.8024712 +0000 UTC m=+110.652361525" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.814152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.814208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.814217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.814246 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.814256 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.916607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.916643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.916668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.916685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:30 crc kubenswrapper[4746]: I0103 03:16:30.916696 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:30Z","lastTransitionTime":"2026-01-03T03:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.020592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.020622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.020630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.020644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.020665 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.123067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.123104 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.123113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.123127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.123150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.225496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.225540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.225549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.225564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.225574 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.329263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.329299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.329309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.329324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.329333 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.433270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.433341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.433366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.433399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.433421 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.536897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.536971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.536995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.537020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.537037 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.639384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.639437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.639456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.639485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.639507 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.741954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.742000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.742012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.742024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.742032 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.844012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.844073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.844096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.844126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.844148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.946456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.946507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.946524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.946570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.946588 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.948395 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.948435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.948451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.948470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 03 03:16:31 crc kubenswrapper[4746]: I0103 03:16:31.948485 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-03T03:16:31Z","lastTransitionTime":"2026-01-03T03:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.013360 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9"] Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.013978 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.018473 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.019143 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.019401 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.019604 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.080585 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.080633 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9887b8e9-c459-4d60-833d-f6dd645e878a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.080651 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9887b8e9-c459-4d60-833d-f6dd645e878a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.080722 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9887b8e9-c459-4d60-833d-f6dd645e878a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.080744 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182091 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9887b8e9-c459-4d60-833d-f6dd645e878a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182160 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182228 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182340 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9887b8e9-c459-4d60-833d-f6dd645e878a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182375 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9887b8e9-c459-4d60-833d-f6dd645e878a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182369 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.182476 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9887b8e9-c459-4d60-833d-f6dd645e878a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.183779 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9887b8e9-c459-4d60-833d-f6dd645e878a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.192745 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9887b8e9-c459-4d60-833d-f6dd645e878a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.205587 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9887b8e9-c459-4d60-833d-f6dd645e878a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7lbv9\" (UID: \"9887b8e9-c459-4d60-833d-f6dd645e878a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.338477 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" Jan 03 03:16:32 crc kubenswrapper[4746]: W0103 03:16:32.363071 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9887b8e9_c459_4d60_833d_f6dd645e878a.slice/crio-53d63a8c86404711c25ed113d72c1650395626e8b83cd22483833992af395c4a WatchSource:0}: Error finding container 53d63a8c86404711c25ed113d72c1650395626e8b83cd22483833992af395c4a: Status 404 returned error can't find the container with id 53d63a8c86404711c25ed113d72c1650395626e8b83cd22483833992af395c4a Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.463931 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.464007 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.464013 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:32 crc kubenswrapper[4746]: E0103 03:16:32.464101 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.464147 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:32 crc kubenswrapper[4746]: E0103 03:16:32.464260 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:32 crc kubenswrapper[4746]: E0103 03:16:32.464366 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:32 crc kubenswrapper[4746]: E0103 03:16:32.465122 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:32 crc kubenswrapper[4746]: I0103 03:16:32.465604 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:16:32 crc kubenswrapper[4746]: E0103 03:16:32.465921 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-rzrbx_openshift-ovn-kubernetes(a9a29410-e9d4-4c5a-98cb-e2c56b9170ff)\"" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" Jan 03 03:16:33 crc kubenswrapper[4746]: I0103 03:16:33.127118 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" event={"ID":"9887b8e9-c459-4d60-833d-f6dd645e878a","Type":"ContainerStarted","Data":"ff7750d9e6acbda77f1fc997d0ff34a9451d0f1c29961d0887820f9bc999f052"} Jan 03 03:16:33 crc kubenswrapper[4746]: I0103 03:16:33.127429 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" event={"ID":"9887b8e9-c459-4d60-833d-f6dd645e878a","Type":"ContainerStarted","Data":"53d63a8c86404711c25ed113d72c1650395626e8b83cd22483833992af395c4a"} Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.134447 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/1.log" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.136460 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/0.log" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.136784 4746 generic.go:334] "Generic (PLEG): container finished" podID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" containerID="46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93" exitCode=1 Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.136854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerDied","Data":"46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93"} Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.137256 4746 scope.go:117] "RemoveContainer" containerID="7697cbd1fa1681724804682e82b64a125bf907b5da9592ad8552241de27b9277" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.138091 4746 scope.go:117] "RemoveContainer" containerID="46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93" Jan 03 03:16:34 crc kubenswrapper[4746]: E0103 03:16:34.138424 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-plg55_openshift-multus(7938adea-5f3a-4bfa-8776-f8b06ce7219e)\"" pod="openshift-multus/multus-plg55" podUID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.172862 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7lbv9" podStartSLOduration=95.172834575 podStartE2EDuration="1m35.172834575s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:33.146524008 +0000 UTC m=+112.996414353" watchObservedRunningTime="2026-01-03 03:16:34.172834575 +0000 UTC m=+114.022724900" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.464545 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.464558 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.464589 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:34 crc kubenswrapper[4746]: I0103 03:16:34.464704 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:34 crc kubenswrapper[4746]: E0103 03:16:34.465244 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:34 crc kubenswrapper[4746]: E0103 03:16:34.465439 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:34 crc kubenswrapper[4746]: E0103 03:16:34.465600 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:34 crc kubenswrapper[4746]: E0103 03:16:34.465690 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:35 crc kubenswrapper[4746]: I0103 03:16:35.143973 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/1.log" Jan 03 03:16:36 crc kubenswrapper[4746]: I0103 03:16:36.464748 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:36 crc kubenswrapper[4746]: I0103 03:16:36.464817 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:36 crc kubenswrapper[4746]: I0103 03:16:36.464873 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:36 crc kubenswrapper[4746]: I0103 03:16:36.464748 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:36 crc kubenswrapper[4746]: E0103 03:16:36.465102 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:36 crc kubenswrapper[4746]: E0103 03:16:36.465264 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:36 crc kubenswrapper[4746]: E0103 03:16:36.465558 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:36 crc kubenswrapper[4746]: E0103 03:16:36.466120 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:38 crc kubenswrapper[4746]: I0103 03:16:38.464355 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:38 crc kubenswrapper[4746]: I0103 03:16:38.464448 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:38 crc kubenswrapper[4746]: I0103 03:16:38.464496 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:38 crc kubenswrapper[4746]: I0103 03:16:38.464515 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:38 crc kubenswrapper[4746]: E0103 03:16:38.464745 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:38 crc kubenswrapper[4746]: E0103 03:16:38.464911 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:38 crc kubenswrapper[4746]: E0103 03:16:38.465086 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:38 crc kubenswrapper[4746]: E0103 03:16:38.465171 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.441217 4746 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 03 03:16:40 crc kubenswrapper[4746]: I0103 03:16:40.464206 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:40 crc kubenswrapper[4746]: I0103 03:16:40.464274 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.466353 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:40 crc kubenswrapper[4746]: I0103 03:16:40.466418 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:40 crc kubenswrapper[4746]: I0103 03:16:40.466428 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.466487 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.466788 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.466948 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:40 crc kubenswrapper[4746]: E0103 03:16:40.549752 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 03:16:42 crc kubenswrapper[4746]: I0103 03:16:42.464638 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:42 crc kubenswrapper[4746]: I0103 03:16:42.464734 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:42 crc kubenswrapper[4746]: I0103 03:16:42.464767 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:42 crc kubenswrapper[4746]: I0103 03:16:42.464746 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:42 crc kubenswrapper[4746]: E0103 03:16:42.464851 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:42 crc kubenswrapper[4746]: E0103 03:16:42.465212 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:42 crc kubenswrapper[4746]: E0103 03:16:42.465323 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:42 crc kubenswrapper[4746]: E0103 03:16:42.465438 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:44 crc kubenswrapper[4746]: I0103 03:16:44.466005 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:44 crc kubenswrapper[4746]: E0103 03:16:44.466217 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:44 crc kubenswrapper[4746]: I0103 03:16:44.466293 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:44 crc kubenswrapper[4746]: I0103 03:16:44.466316 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:44 crc kubenswrapper[4746]: I0103 03:16:44.466378 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:44 crc kubenswrapper[4746]: E0103 03:16:44.466459 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:44 crc kubenswrapper[4746]: E0103 03:16:44.466580 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:44 crc kubenswrapper[4746]: E0103 03:16:44.466692 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:45 crc kubenswrapper[4746]: E0103 03:16:45.551539 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 03:16:46 crc kubenswrapper[4746]: I0103 03:16:46.464601 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:46 crc kubenswrapper[4746]: I0103 03:16:46.464733 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:46 crc kubenswrapper[4746]: I0103 03:16:46.464761 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:46 crc kubenswrapper[4746]: I0103 03:16:46.464893 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:46 crc kubenswrapper[4746]: E0103 03:16:46.464883 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:46 crc kubenswrapper[4746]: E0103 03:16:46.464948 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:46 crc kubenswrapper[4746]: E0103 03:16:46.465076 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:46 crc kubenswrapper[4746]: E0103 03:16:46.465156 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:47 crc kubenswrapper[4746]: I0103 03:16:47.466061 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.193801 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/3.log" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.196705 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerStarted","Data":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.197506 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.235764 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podStartSLOduration=109.235748068 podStartE2EDuration="1m49.235748068s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:16:48.235634695 +0000 UTC m=+128.085525010" watchObservedRunningTime="2026-01-03 03:16:48.235748068 +0000 UTC m=+128.085638383" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.378362 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-57tv2"] Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.378471 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:48 crc kubenswrapper[4746]: E0103 03:16:48.378564 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.464559 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.464681 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:48 crc kubenswrapper[4746]: I0103 03:16:48.464552 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:48 crc kubenswrapper[4746]: E0103 03:16:48.464826 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:48 crc kubenswrapper[4746]: E0103 03:16:48.464737 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:48 crc kubenswrapper[4746]: E0103 03:16:48.465038 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:49 crc kubenswrapper[4746]: I0103 03:16:49.465536 4746 scope.go:117] "RemoveContainer" containerID="46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93" Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.206336 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/1.log" Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.206391 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerStarted","Data":"54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60"} Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.464961 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.465054 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.465060 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:50 crc kubenswrapper[4746]: I0103 03:16:50.465067 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:50 crc kubenswrapper[4746]: E0103 03:16:50.465689 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:50 crc kubenswrapper[4746]: E0103 03:16:50.465889 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:50 crc kubenswrapper[4746]: E0103 03:16:50.466086 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:50 crc kubenswrapper[4746]: E0103 03:16:50.466133 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:50 crc kubenswrapper[4746]: E0103 03:16:50.552439 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 03:16:52 crc kubenswrapper[4746]: I0103 03:16:52.464327 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:52 crc kubenswrapper[4746]: I0103 03:16:52.464381 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:52 crc kubenswrapper[4746]: I0103 03:16:52.464422 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:52 crc kubenswrapper[4746]: E0103 03:16:52.464632 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:52 crc kubenswrapper[4746]: I0103 03:16:52.464842 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:52 crc kubenswrapper[4746]: E0103 03:16:52.464875 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:52 crc kubenswrapper[4746]: E0103 03:16:52.465001 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:52 crc kubenswrapper[4746]: E0103 03:16:52.465211 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:54 crc kubenswrapper[4746]: I0103 03:16:54.464524 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:54 crc kubenswrapper[4746]: I0103 03:16:54.464599 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:54 crc kubenswrapper[4746]: E0103 03:16:54.464822 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 03 03:16:54 crc kubenswrapper[4746]: I0103 03:16:54.464846 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:54 crc kubenswrapper[4746]: E0103 03:16:54.465151 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 03 03:16:54 crc kubenswrapper[4746]: E0103 03:16:54.465361 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-57tv2" podUID="28a574f3-8744-4d57-aada-e4b328244e19" Jan 03 03:16:54 crc kubenswrapper[4746]: I0103 03:16:54.464769 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:54 crc kubenswrapper[4746]: E0103 03:16:54.466556 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.464199 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.464334 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.464643 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.465080 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.468487 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.468966 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.469418 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.469562 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.470046 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 03:16:56 crc kubenswrapper[4746]: I0103 03:16:56.470257 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 03:17:01 crc kubenswrapper[4746]: I0103 03:17:01.374512 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:17:01 crc kubenswrapper[4746]: I0103 03:17:01.374643 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.671736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.734592 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j9fzb"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.735581 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.741071 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.741322 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.741764 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.742028 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.742212 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.743399 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.745299 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.745942 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.747937 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.748128 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.749057 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.749496 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.752258 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bzztq"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.752984 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.753969 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.754242 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.755627 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.756479 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.757737 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.757945 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d58zr"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.758712 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.759815 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.760181 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.760524 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sw9vc"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.761198 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.772008 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.772792 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.773526 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.773607 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.773791 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.773870 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.773890 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.774397 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.774496 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.774437 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.774568 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.775051 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.775077 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.775173 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.775264 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.776324 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.776599 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.776689 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.776829 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.776976 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.777163 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.777356 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.777593 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.777700 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.778855 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.778953 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.778948 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.779151 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.779738 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.779951 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.780644 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.780889 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.780947 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.789894 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.790413 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.790895 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fcxcc"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.791170 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.791807 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-j2jgm"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.792687 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.793757 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.794381 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.808896 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.812882 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.816975 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.827110 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.868640 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.871373 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fws24"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.871680 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-js77f"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.872168 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.872226 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.872938 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.878766 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879182 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879310 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879349 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879432 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879520 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879682 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879921 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879961 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880048 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880148 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.879927 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880500 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880627 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880751 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.880779 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.882382 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvcn\" (UniqueName: \"kubernetes.io/projected/9d4e4b7f-a115-44f6-93d2-4649b99340c3-kube-api-access-dbvcn\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.882432 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e4b7f-a115-44f6-93d2-4649b99340c3-serving-cert\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.882452 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d4e4b7f-a115-44f6-93d2-4649b99340c3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.884277 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.884455 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.884582 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.885405 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.885743 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.886129 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.886238 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.886290 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.886458 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887019 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887170 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887302 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887312 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887413 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887517 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887633 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887771 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.887785 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.888201 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.888386 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.888492 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.888587 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 
03:17:02.889401 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.889882 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.890876 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.892862 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.893818 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.895068 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.902894 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.903643 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lnzfg"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.904088 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.904352 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.904620 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.905033 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.905329 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.905477 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.905895 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.906340 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.906928 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.907291 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.908492 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.908582 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.908713 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.908724 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.909123 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.909550 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.915912 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.916566 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khnmh"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.917431 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.917718 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.918894 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.953923 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.956968 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.957191 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.957999 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.961286 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j9fzb"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.980197 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.981337 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt"] Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.982035 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.982530 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984283 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984319 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-serving-cert\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984344 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7dx9\" (UniqueName: \"kubernetes.io/projected/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-kube-api-access-r7dx9\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984385 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-etcd-serving-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984403 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n84l5\" (UniqueName: \"kubernetes.io/projected/42098287-d6c9-4d15-a33b-2dbf74558a73-kube-api-access-n84l5\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984418 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-serving-cert\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984435 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984453 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-config\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984499 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984514 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9864171-c848-4905-96fd-232f0f0df7f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984531 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-encryption-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984572 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a51c938-dfaf-4222-afb6-0cd79e445537-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984589 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984606 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-config\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984622 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-client\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-etcd-client\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984668 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984685 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984704 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984720 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984737 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-policies\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984756 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-encryption-config\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984774 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984800 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cbms\" (UniqueName: \"kubernetes.io/projected/27d52d81-bec6-495c-b080-d3244284d228-kube-api-access-6cbms\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984817 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984835 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmcrb\" (UniqueName: \"kubernetes.io/projected/d1339946-fe37-4d87-b959-fd1349323679-kube-api-access-kmcrb\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984854 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984873 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984893 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984914 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a303db-5f4e-431d-99c8-8e0b57386a26-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a303db-5f4e-431d-99c8-8e0b57386a26-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984952 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984970 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.984988 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqprk\" (UniqueName: \"kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985010 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42098287-d6c9-4d15-a33b-2dbf74558a73-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985026 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-audit-dir\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985049 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985072 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9xl\" (UniqueName: \"kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985089 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5zf\" (UniqueName: \"kubernetes.io/projected/f93d60e3-b792-4b40-88fd-b979e91021f3-kube-api-access-kw5zf\") pod \"downloads-7954f5f757-j2jgm\" (UID: \"f93d60e3-b792-4b40-88fd-b979e91021f3\") " pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985116 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvcn\" (UniqueName: \"kubernetes.io/projected/9d4e4b7f-a115-44f6-93d2-4649b99340c3-kube-api-access-dbvcn\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985155 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985172 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985191 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-trusted-ca\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985207 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985225 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985241 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzdm\" (UniqueName: \"kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985257 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985274 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-config\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-serving-cert\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985309 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-image-import-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985331 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e4b7f-a115-44f6-93d2-4649b99340c3-serving-cert\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d4e4b7f-a115-44f6-93d2-4649b99340c3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9864171-c848-4905-96fd-232f0f0df7f9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985390 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-images\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985408 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kgp\" (UniqueName: \"kubernetes.io/projected/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-kube-api-access-62kgp\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985426 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggftc\" (UniqueName: \"kubernetes.io/projected/d9864171-c848-4905-96fd-232f0f0df7f9-kube-api-access-ggftc\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985443 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-node-pullsecrets\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-audit\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-dir\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985499 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1339946-fe37-4d87-b959-fd1349323679-serving-cert\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985520 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985540 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ltz\" (UniqueName: \"kubernetes.io/projected/7a51c938-dfaf-4222-afb6-0cd79e445537-kube-api-access-b8ltz\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.985576 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9m4\" (UniqueName: \"kubernetes.io/projected/28a303db-5f4e-431d-99c8-8e0b57386a26-kube-api-access-rb9m4\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.991433 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d4e4b7f-a115-44f6-93d2-4649b99340c3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.992334 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.994471 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.994618 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 03 03:17:02 crc 
kubenswrapper[4746]: I0103 03:17:02.995536 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.996048 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.996315 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.998348 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.998745 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 03 03:17:02 crc kubenswrapper[4746]: I0103 03:17:02.998890 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.003205 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wwvt9"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.004813 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.005524 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.005841 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.005576 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.006053 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.007225 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.009805 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.011905 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.012677 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.013893 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k57gl"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.014033 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.014872 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.018031 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.019708 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.021215 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.021894 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.022239 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.025456 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.027474 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.029137 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.030113 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.033732 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.034282 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.034706 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bzztq"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.034830 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.035841 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.037224 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.037814 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.038559 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.040164 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lkd2p"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.041786 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.042172 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.043231 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.045585 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.045668 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4e4b7f-a115-44f6-93d2-4649b99340c3-serving-cert\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.047492 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.049796 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-js77f"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.051002 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.052776 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.053901 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.055224 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sw9vc"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.057285 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.059631 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.061258 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d58zr"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.063057 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.064182 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.067006 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.078433 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.081115 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.083017 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.083855 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lnzfg"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.085723 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wwvt9"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086237 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a303db-5f4e-431d-99c8-8e0b57386a26-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086268 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086292 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086314 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqprk\" (UniqueName: \"kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc 
kubenswrapper[4746]: I0103 03:17:03.086335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42098287-d6c9-4d15-a33b-2dbf74558a73-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086358 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8n4\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-kube-api-access-cm8n4\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086378 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz9m\" (UniqueName: \"kubernetes.io/projected/84efb631-0927-4470-9a6c-9af70fbdb9a0-kube-api-access-7cz9m\") pod \"migrator-59844c95c7-nfw8x\" (UID: \"84efb631-0927-4470-9a6c-9af70fbdb9a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-audit-dir\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086461 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-config\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086481 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5a89946-a489-411e-8e5d-07e166de5088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086501 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-serving-cert\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086522 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-trusted-ca-bundle\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086543 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086560 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9xl\" (UniqueName: \"kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086579 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5zf\" (UniqueName: \"kubernetes.io/projected/f93d60e3-b792-4b40-88fd-b979e91021f3-kube-api-access-kw5zf\") pod \"downloads-7954f5f757-j2jgm\" (UID: \"f93d60e3-b792-4b40-88fd-b979e91021f3\") " pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-service-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086619 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9tk\" (UniqueName: \"kubernetes.io/projected/844f9e49-a60f-445f-a3a2-c92bb3800691-kube-api-access-sf9tk\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086703 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086722 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-trusted-ca\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086742 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: 
\"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086761 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086780 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzdm\" (UniqueName: \"kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086819 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-config\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086840 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-config\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-serving-cert\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086882 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086900 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/802e491b-8f4e-4cc7-b6df-756478ebbe1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086919 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-image-import-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26985b93-c203-432e-b302-9a73c40803e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086960 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.086983 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9864171-c848-4905-96fd-232f0f0df7f9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087007 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-images\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087034 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kgp\" (UniqueName: \"kubernetes.io/projected/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-kube-api-access-62kgp\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087059 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggftc\" (UniqueName: \"kubernetes.io/projected/d9864171-c848-4905-96fd-232f0f0df7f9-kube-api-access-ggftc\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087079 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-node-pullsecrets\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " 
pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087096 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc99dd78-6470-4b4c-8db2-d01982e37009-metrics-tls\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087115 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4bb\" (UniqueName: \"kubernetes.io/projected/e0f102bc-480f-4c8f-b3e3-7afa141e912c-kube-api-access-4x4bb\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087142 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-audit\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087160 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-dir\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087194 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1339946-fe37-4d87-b959-fd1349323679-serving-cert\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087211 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087230 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087250 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/26985b93-c203-432e-b302-9a73c40803e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087268 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087286 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087305 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1edd4480-eb8f-4841-b34b-df768497de26-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087325 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ltz\" (UniqueName: \"kubernetes.io/projected/7a51c938-dfaf-4222-afb6-0cd79e445537-kube-api-access-b8ltz\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087343 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9m4\" (UniqueName: \"kubernetes.io/projected/28a303db-5f4e-431d-99c8-8e0b57386a26-kube-api-access-rb9m4\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087362 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087382 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fca306c-5880-4915-8d7e-c4e9df65d59e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087400 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-oauth-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087397 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087419 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-oauth-serving-cert\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087460 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-serving-cert\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087480 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7dx9\" (UniqueName: \"kubernetes.io/projected/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-kube-api-access-r7dx9\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-etcd-serving-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087555 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/802e491b-8f4e-4cc7-b6df-756478ebbe1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: 
\"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087573 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n84l5\" (UniqueName: \"kubernetes.io/projected/42098287-d6c9-4d15-a33b-2dbf74558a73-kube-api-access-n84l5\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087589 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-serving-cert\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087607 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087627 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-config\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087682 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087700 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwxp\" (UniqueName: \"kubernetes.io/projected/1edd4480-eb8f-4841-b34b-df768497de26-kube-api-access-sdwxp\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc 
kubenswrapper[4746]: I0103 03:17:03.087745 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fca306c-5880-4915-8d7e-c4e9df65d59e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087787 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9864171-c848-4905-96fd-232f0f0df7f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087807 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-serving-cert\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-service-ca\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a303db-5f4e-431d-99c8-8e0b57386a26-config\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087949 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j2jgm"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.088142 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.088428 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089097 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089320 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.087868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-encryption-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a51c938-dfaf-4222-afb6-0cd79e445537-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089424 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089493 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089593 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-config\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089624 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: 
\"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089673 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089703 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-config\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089734 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtp5\" (UniqueName: \"kubernetes.io/projected/d5a89946-a489-411e-8e5d-07e166de5088-kube-api-access-xbtp5\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-client\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089787 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-etcd-client\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089812 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089888 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.089912 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-client\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092044 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-policies\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-encryption-config\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092111 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092146 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvcgm\" (UniqueName: \"kubernetes.io/projected/8fca306c-5880-4915-8d7e-c4e9df65d59e-kube-api-access-jvcgm\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092173 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.090756 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092223 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6cbms\" (UniqueName: \"kubernetes.io/projected/27d52d81-bec6-495c-b080-d3244284d228-kube-api-access-6cbms\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092259 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092299 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmcrb\" (UniqueName: \"kubernetes.io/projected/d1339946-fe37-4d87-b959-fd1349323679-kube-api-access-kmcrb\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092330 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8snm\" (UniqueName: \"kubernetes.io/projected/fc99dd78-6470-4b4c-8db2-d01982e37009-kube-api-access-k8snm\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092366 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092398 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092430 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-serving-cert\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092459 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a303db-5f4e-431d-99c8-8e0b57386a26-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092496 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkq7\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-kube-api-access-zvkq7\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092759 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-images\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092770 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-serving-cert\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.090744 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-trusted-ca\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.092852 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-dir\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093114 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-audit\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093146 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1339946-fe37-4d87-b959-fd1349323679-serving-cert\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.090206 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-config\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.091208 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.091247 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fws24"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093269 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-t84km"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093648 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42098287-d6c9-4d15-a33b-2dbf74558a73-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-audit-dir\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.093932 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.094317 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.094425 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-t84km" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.091443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-etcd-serving-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095139 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-serving-cert\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095218 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/27d52d81-bec6-495c-b080-d3244284d228-node-pullsecrets\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095369 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095502 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095808 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.095873 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.096136 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.096374 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.096375 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9864171-c848-4905-96fd-232f0f0df7f9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.096648 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.096802 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.097016 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-audit-policies\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.097668 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9864171-c848-4905-96fd-232f0f0df7f9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.097709 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a51c938-dfaf-4222-afb6-0cd79e445537-config\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.098052 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a51c938-dfaf-4222-afb6-0cd79e445537-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.098184 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xgssz"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.098528 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " 
pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.098775 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.098813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1339946-fe37-4d87-b959-fd1349323679-config\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.099494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.099950 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.100545 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.100722 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.102586 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/27d52d81-bec6-495c-b080-d3244284d228-image-import-ca\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.101588 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.101963 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-encryption-config\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " 
pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.102129 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.101530 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-etcd-client\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103057 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103107 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a303db-5f4e-431d-99c8-8e0b57386a26-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103230 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103567 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khnmh"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103598 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103779 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.103840 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.104959 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fcxcc"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.105067 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/27d52d81-bec6-495c-b080-d3244284d228-etcd-client\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.105164 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-encryption-config\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.106817 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.108278 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.109444 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.110709 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.111868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t84km"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.112920 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.114037 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xgssz"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.115077 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-24ldq"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.115843 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.116216 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24ldq"] Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.117764 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.138775 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.159527 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.179828 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193184 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26985b93-c203-432e-b302-9a73c40803e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193262 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193332 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc99dd78-6470-4b4c-8db2-d01982e37009-metrics-tls\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193359 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4bb\" (UniqueName: \"kubernetes.io/projected/e0f102bc-480f-4c8f-b3e3-7afa141e912c-kube-api-access-4x4bb\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193416 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " 
pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193485 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26985b93-c203-432e-b302-9a73c40803e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193514 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1edd4480-eb8f-4841-b34b-df768497de26-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193644 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fca306c-5880-4915-8d7e-c4e9df65d59e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193687 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-oauth-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193727 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-oauth-serving-cert\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/802e491b-8f4e-4cc7-b6df-756478ebbe1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193817 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193854 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwxp\" (UniqueName: \"kubernetes.io/projected/1edd4480-eb8f-4841-b34b-df768497de26-kube-api-access-sdwxp\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193917 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fca306c-5880-4915-8d7e-c4e9df65d59e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193936 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-serving-cert\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.193953 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-service-ca\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-config\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194019 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtp5\" (UniqueName: 
\"kubernetes.io/projected/d5a89946-a489-411e-8e5d-07e166de5088-kube-api-access-xbtp5\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194104 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-client\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvcgm\" (UniqueName: \"kubernetes.io/projected/8fca306c-5880-4915-8d7e-c4e9df65d59e-kube-api-access-jvcgm\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194167 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194227 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8snm\" (UniqueName: \"kubernetes.io/projected/fc99dd78-6470-4b4c-8db2-d01982e37009-kube-api-access-k8snm\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194248 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvkq7\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-kube-api-access-zvkq7\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm8n4\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-kube-api-access-cm8n4\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz9m\" (UniqueName: \"kubernetes.io/projected/84efb631-0927-4470-9a6c-9af70fbdb9a0-kube-api-access-7cz9m\") pod \"migrator-59844c95c7-nfw8x\" (UID: \"84efb631-0927-4470-9a6c-9af70fbdb9a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-config\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194346 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5a89946-a489-411e-8e5d-07e166de5088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194379 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-serving-cert\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-trusted-ca-bundle\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-service-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9tk\" (UniqueName: \"kubernetes.io/projected/844f9e49-a60f-445f-a3a2-c92bb3800691-kube-api-access-sf9tk\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194499 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-config\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194514 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/802e491b-8f4e-4cc7-b6df-756478ebbe1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc 
kubenswrapper[4746]: I0103 03:17:03.194584 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-auth-proxy-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194792 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.194070 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edd4480-eb8f-4841-b34b-df768497de26-config\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.197276 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-service-ca\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.197944 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-oauth-serving-cert\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.198523 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-config\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.198533 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.198528 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.198639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f102bc-480f-4c8f-b3e3-7afa141e912c-trusted-ca-bundle\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.199541 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-serving-cert\") pod 
\"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.199630 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26985b93-c203-432e-b302-9a73c40803e8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.200064 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-service-ca\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.200679 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e0f102bc-480f-4c8f-b3e3-7afa141e912c-console-oauth-config\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.200979 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1edd4480-eb8f-4841-b34b-df768497de26-machine-approver-tls\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.201250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26985b93-c203-432e-b302-9a73c40803e8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.204262 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.205738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-serving-cert\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.207368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-config\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.208498 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/844f9e49-a60f-445f-a3a2-c92bb3800691-etcd-client\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.217725 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.238450 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.251431 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/802e491b-8f4e-4cc7-b6df-756478ebbe1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.268591 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.278829 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/802e491b-8f4e-4cc7-b6df-756478ebbe1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.279503 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.298421 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.318889 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.327116 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fc99dd78-6470-4b4c-8db2-d01982e37009-metrics-tls\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.338404 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.359205 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.378531 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.398604 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.418447 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.439610 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.449589 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fca306c-5880-4915-8d7e-c4e9df65d59e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.458086 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.466650 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fca306c-5880-4915-8d7e-c4e9df65d59e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.480540 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.499558 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.518977 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.538758 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.559267 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.571914 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5a89946-a489-411e-8e5d-07e166de5088-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.579177 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.599023 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.618818 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.638616 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.649865 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.659274 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.666999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-config\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.679303 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.699704 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.718473 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.738412 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.758563 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.779949 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.799171 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.818607 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.838543 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.858508 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.880070 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.935330 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dbvcn\" (UniqueName: \"kubernetes.io/projected/9d4e4b7f-a115-44f6-93d2-4649b99340c3-kube-api-access-dbvcn\") pod \"openshift-config-operator-7777fb866f-sp7t5\" (UID: \"9d4e4b7f-a115-44f6-93d2-4649b99340c3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.958986 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.969603 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.978097 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 03:17:03 crc kubenswrapper[4746]: I0103 03:17:03.999721 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.017427 4746 request.go:700] Waited for 1.012206589s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.019912 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.039091 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.057730 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.078324 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.101855 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.119582 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.139212 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.159442 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.180951 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.200383 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.218124 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.232929 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5"] Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.238595 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.262786 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.267828 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" event={"ID":"9d4e4b7f-a115-44f6-93d2-4649b99340c3","Type":"ContainerStarted","Data":"abc1bdc7f69914bb8dd743cec4e8e39cf45dfd376b917d8e5e9c77fb60d2c4db"} Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.287862 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.299285 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.319754 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.339786 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.358427 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.379077 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.402598 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.418117 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.439903 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.459831 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.479276 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.498382 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.518700 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.538698 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.559393 4746 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.588109 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.599514 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.618950 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.672028 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqprk\" (UniqueName: \"kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk\") pod \"route-controller-manager-6576b87f9c-h87hg\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.694275 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ltz\" (UniqueName: \"kubernetes.io/projected/7a51c938-dfaf-4222-afb6-0cd79e445537-kube-api-access-b8ltz\") pod \"machine-api-operator-5694c8668f-d58zr\" (UID: \"7a51c938-dfaf-4222-afb6-0cd79e445537\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.697431 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9m4\" (UniqueName: \"kubernetes.io/projected/28a303db-5f4e-431d-99c8-8e0b57386a26-kube-api-access-rb9m4\") pod \"openshift-apiserver-operator-796bbdcf4f-wndv7\" (UID: \"28a303db-5f4e-431d-99c8-8e0b57386a26\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.709229 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.720493 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7dx9\" (UniqueName: \"kubernetes.io/projected/21795ebc-fb42-4e7d-8f3f-76dcf85ed71f-kube-api-access-r7dx9\") pod \"authentication-operator-69f744f599-bzztq\" (UID: \"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.737996 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84l5\" (UniqueName: \"kubernetes.io/projected/42098287-d6c9-4d15-a33b-2dbf74558a73-kube-api-access-n84l5\") pod \"cluster-samples-operator-665b6dd947-dtvtx\" (UID: \"42098287-d6c9-4d15-a33b-2dbf74558a73\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.741719 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.756783 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5zf\" (UniqueName: \"kubernetes.io/projected/f93d60e3-b792-4b40-88fd-b979e91021f3-kube-api-access-kw5zf\") pod \"downloads-7954f5f757-j2jgm\" (UID: \"f93d60e3-b792-4b40-88fd-b979e91021f3\") " pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.781572 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggftc\" (UniqueName: \"kubernetes.io/projected/d9864171-c848-4905-96fd-232f0f0df7f9-kube-api-access-ggftc\") pod \"openshift-controller-manager-operator-756b6f6bc6-qxk8h\" (UID: \"d9864171-c848-4905-96fd-232f0f0df7f9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.797440 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kgp\" (UniqueName: \"kubernetes.io/projected/7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08-kube-api-access-62kgp\") pod \"apiserver-7bbb656c7d-tggg2\" (UID: \"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.804182 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.818741 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9xl\" (UniqueName: \"kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl\") pod \"oauth-openshift-558db77b4-sw9vc\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.819727 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.821958 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.849396 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.856104 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cbms\" (UniqueName: \"kubernetes.io/projected/27d52d81-bec6-495c-b080-d3244284d228-kube-api-access-6cbms\") pod \"apiserver-76f77b778f-j9fzb\" (UID: \"27d52d81-bec6-495c-b080-d3244284d228\") " pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.860163 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.862085 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.862405 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.870876 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.881681 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.919979 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmcrb\" (UniqueName: \"kubernetes.io/projected/d1339946-fe37-4d87-b959-fd1349323679-kube-api-access-kmcrb\") pod \"console-operator-58897d9998-fcxcc\" (UID: \"d1339946-fe37-4d87-b959-fd1349323679\") " pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.936683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzdm\" (UniqueName: \"kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm\") pod \"controller-manager-879f6c89f-58c52\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.938697 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.961242 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.978960 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.987949 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-d58zr"] Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.990236 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" Jan 03 03:17:04 crc kubenswrapper[4746]: I0103 03:17:04.998672 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.009009 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.022156 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.023597 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a51c938_dfaf_4222_afb6_0cd79e445537.slice/crio-ef8b84feecea055efeebffb118bae028f88f869069c18c1997c9098e7df0ba50 WatchSource:0}: Error finding container ef8b84feecea055efeebffb118bae028f88f869069c18c1997c9098e7df0ba50: Status 404 returned error can't find the container with id ef8b84feecea055efeebffb118bae028f88f869069c18c1997c9098e7df0ba50 Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.026334 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a303db_5f4e_431d_99c8_8e0b57386a26.slice/crio-22af47bd989a048bac3544af0f508004d93b09e1723c91456d153e6f0aa013d9 WatchSource:0}: Error finding container 22af47bd989a048bac3544af0f508004d93b09e1723c91456d153e6f0aa013d9: Status 404 returned error can't find the container with id 22af47bd989a048bac3544af0f508004d93b09e1723c91456d153e6f0aa013d9 Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.039141 4746 request.go:700] Waited for 1.923067855s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.042064 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.068998 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.101337 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.121898 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4bb\" (UniqueName: \"kubernetes.io/projected/e0f102bc-480f-4c8f-b3e3-7afa141e912c-kube-api-access-4x4bb\") pod \"console-f9d7485db-fws24\" (UID: \"e0f102bc-480f-4c8f-b3e3-7afa141e912c\") " pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.128500 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.130698 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8snm\" (UniqueName: \"kubernetes.io/projected/fc99dd78-6470-4b4c-8db2-d01982e37009-kube-api-access-k8snm\") pod \"dns-operator-744455d44c-lnzfg\" (UID: \"fc99dd78-6470-4b4c-8db2-d01982e37009\") " pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.137044 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwxp\" (UniqueName: \"kubernetes.io/projected/1edd4480-eb8f-4841-b34b-df768497de26-kube-api-access-sdwxp\") pod \"machine-approver-56656f9798-v4lzc\" (UID: \"1edd4480-eb8f-4841-b34b-df768497de26\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.140412 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.155644 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.158537 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.186685 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvkq7\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-kube-api-access-zvkq7\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.191406 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.197451 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm8n4\" (UniqueName: \"kubernetes.io/projected/802e491b-8f4e-4cc7-b6df-756478ebbe1e-kube-api-access-cm8n4\") pod \"ingress-operator-5b745b69d9-mlkvc\" (UID: \"802e491b-8f4e-4cc7-b6df-756478ebbe1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.198235 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.213959 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j2jgm"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.219277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cz9m\" (UniqueName: \"kubernetes.io/projected/84efb631-0927-4470-9a6c-9af70fbdb9a0-kube-api-access-7cz9m\") pod \"migrator-59844c95c7-nfw8x\" (UID: \"84efb631-0927-4470-9a6c-9af70fbdb9a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.229505 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf93d60e3_b792_4b40_88fd_b979e91021f3.slice/crio-237c39fd90c5b7a88c95c52d9184c7c1545d8fd9435bb434f5d2720cb0434883 WatchSource:0}: Error finding container 237c39fd90c5b7a88c95c52d9184c7c1545d8fd9435bb434f5d2720cb0434883: Status 404 returned error can't find the container with id 237c39fd90c5b7a88c95c52d9184c7c1545d8fd9435bb434f5d2720cb0434883 Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.236722 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26985b93-c203-432e-b302-9a73c40803e8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-swswl\" (UID: \"26985b93-c203-432e-b302-9a73c40803e8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.236973 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.250051 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.256451 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.257706 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9tk\" (UniqueName: \"kubernetes.io/projected/844f9e49-a60f-445f-a3a2-c92bb3800691-kube-api-access-sf9tk\") pod \"etcd-operator-b45778765-js77f\" (UID: \"844f9e49-a60f-445f-a3a2-c92bb3800691\") " pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.273951 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff7b4792-fd79-4c60-bafa-c9f05f0e0deb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2d9l9\" (UID: \"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.303881 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtp5\" (UniqueName: \"kubernetes.io/projected/d5a89946-a489-411e-8e5d-07e166de5088-kube-api-access-xbtp5\") pod \"multus-admission-controller-857f4d67dd-khnmh\" (UID: \"d5a89946-a489-411e-8e5d-07e166de5088\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.315735 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bzztq"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.318424 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.319411 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvcgm\" (UniqueName: \"kubernetes.io/projected/8fca306c-5880-4915-8d7e-c4e9df65d59e-kube-api-access-jvcgm\") pod \"kube-storage-version-migrator-operator-b67b599dd-hxwk5\" (UID: \"8fca306c-5880-4915-8d7e-c4e9df65d59e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.324105 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" event={"ID":"28a303db-5f4e-431d-99c8-8e0b57386a26","Type":"ContainerStarted","Data":"1b6985bd1eb853ddd75343095e5a8a8767905dc708eb32f89bdeebfa862b82fd"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.324153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" event={"ID":"28a303db-5f4e-431d-99c8-8e0b57386a26","Type":"ContainerStarted","Data":"22af47bd989a048bac3544af0f508004d93b09e1723c91456d153e6f0aa013d9"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.326329 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" event={"ID":"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8","Type":"ContainerStarted","Data":"58fd30e776e425a56eee45b4559c7ccc0314a4102496aa5153736dc167b740fa"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.338051 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" 
event={"ID":"7a51c938-dfaf-4222-afb6-0cd79e445537","Type":"ContainerStarted","Data":"ef8b84feecea055efeebffb118bae028f88f869069c18c1997c9098e7df0ba50"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.341080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b5b425d-89ff-4cf3-97c4-7263f3a345cf-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z2jrv\" (UID: \"1b5b425d-89ff-4cf3-97c4-7263f3a345cf\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.359683 4746 generic.go:334] "Generic (PLEG): container finished" podID="9d4e4b7f-a115-44f6-93d2-4649b99340c3" containerID="69aa90b78fe38ea6e9de1662583ee1a07c70b1908611c3849831a0992a91f6eb" exitCode=0 Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.360013 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" event={"ID":"9d4e4b7f-a115-44f6-93d2-4649b99340c3","Type":"ContainerDied","Data":"69aa90b78fe38ea6e9de1662583ee1a07c70b1908611c3849831a0992a91f6eb"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.362794 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j2jgm" event={"ID":"f93d60e3-b792-4b40-88fd-b979e91021f3","Type":"ContainerStarted","Data":"237c39fd90c5b7a88c95c52d9184c7c1545d8fd9435bb434f5d2720cb0434883"} Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.366776 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sw9vc"] Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.376846 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21795ebc_fb42_4e7d_8f3f_76dcf85ed71f.slice/crio-df45832763592192d8954de8802f404212b2d93816cd03c78f58d8ce2c43140b WatchSource:0}: Error finding container df45832763592192d8954de8802f404212b2d93816cd03c78f58d8ce2c43140b: Status 404 returned error can't find the container with id df45832763592192d8954de8802f404212b2d93816cd03c78f58d8ce2c43140b Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.432684 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-srv-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-proxy-tls\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433191 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433245 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433302 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433442 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433489 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.433536 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbcq\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.434499 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:05.933592102 +0000 UTC m=+145.783482407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.434596 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.434639 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.434821 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.434987 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.435420 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.435633 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4gm\" (UniqueName: \"kubernetes.io/projected/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-kube-api-access-zs4gm\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.435680 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgv2n\" (UniqueName: \"kubernetes.io/projected/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-kube-api-access-sgv2n\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.435716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.435734 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-images\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.444228 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8cd430_5229_4772_8c83_9fbdbeaf54de.slice/crio-5fb99619bfbbdabcf6413d2dd121e96f099962acafca451f14545b5b9109236c WatchSource:0}: Error finding container 5fb99619bfbbdabcf6413d2dd121e96f099962acafca451f14545b5b9109236c: Status 404 returned error can't find the container with id 5fb99619bfbbdabcf6413d2dd121e96f099962acafca451f14545b5b9109236c Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.476441 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.478523 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.511053 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j9fzb"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.514546 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.536550 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.537125 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.037096489 +0000 UTC m=+145.886986794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537273 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-cabundle\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537363 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzgt\" (UniqueName: \"kubernetes.io/projected/92be181f-28e2-4b83-a7de-669db662b52c-kube-api-access-pwzgt\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537407 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537430 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkq2n\" (UniqueName: \"kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537540 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlmr\" (UniqueName: \"kubernetes.io/projected/5bebc5d9-35a7-4154-873e-65d60f85f9b6-kube-api-access-vdlmr\") pod \"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537565 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537748 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537873 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-key\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537897 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-srv-cert\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537949 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538021 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538054 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0da58a9-1a93-439c-8f83-8eba0c4a9961-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538083 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-webhook-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538108 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9kbcq\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538130 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-config\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538207 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx52m\" (UniqueName: \"kubernetes.io/projected/f3150af1-742e-4b09-afe0-a819ddedc864-kube-api-access-qx52m\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538393 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-plugins-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-service-ca-bundle\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538496 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-certs\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538589 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538619 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bebc5d9-35a7-4154-873e-65d60f85f9b6-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-default-certificate\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538704 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92be181f-28e2-4b83-a7de-669db662b52c-config-volume\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538737 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0da58a9-1a93-439c-8f83-8eba0c4a9961-proxy-tls\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538932 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4jb\" (UniqueName: \"kubernetes.io/projected/078fac5f-8d76-4f20-9857-18b74a4ebab0-kube-api-access-9c4jb\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.538964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-profile-collector-cert\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539036 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-serving-cert\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539134 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539209 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539246 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllcf\" (UniqueName: \"kubernetes.io/projected/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-kube-api-access-jllcf\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539267 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-socket-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.539538 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zlc\" (UniqueName: \"kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.539958 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.039945517 +0000 UTC m=+145.889835822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.542615 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.537227 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.546879 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547034 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547098 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6hvd\" (UniqueName: \"kubernetes.io/projected/c3185930-86ef-4dff-b4ec-0d60800fb76e-kube-api-access-m6hvd\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547124 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhs8k\" (UniqueName: \"kubernetes.io/projected/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-kube-api-access-bhs8k\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547149 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k728w\" (UniqueName: \"kubernetes.io/projected/99d5586b-555b-4f25-8e1c-c329dffc92fc-kube-api-access-k728w\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.547546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.548189 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.550751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.551894 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.552062 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.555398 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.555948 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-stats-auth\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.555998 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-csi-data-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.556259 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.556696 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bc5\" (UniqueName: \"kubernetes.io/projected/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-kube-api-access-q5bc5\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.556808 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bda2c6fa-8070-438b-b801-0ac1a30822e8-cert\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.556898 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4gm\" (UniqueName: \"kubernetes.io/projected/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-kube-api-access-zs4gm\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.556986 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-node-bootstrap-token\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.557496 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.557090 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8wc\" (UniqueName: \"kubernetes.io/projected/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-kube-api-access-pr8wc\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.558553 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgv2n\" (UniqueName: \"kubernetes.io/projected/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-kube-api-access-sgv2n\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.558598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99d5586b-555b-4f25-8e1c-c329dffc92fc-tmpfs\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.558648 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8cb\" (UniqueName: \"kubernetes.io/projected/bda2c6fa-8070-438b-b801-0ac1a30822e8-kube-api-access-8x8cb\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.559360 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92be181f-28e2-4b83-a7de-669db662b52c-metrics-tls\") pod \"dns-default-t84km\" (UID: 
\"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.559400 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-registration-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.564289 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fws24"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.584409 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.584526 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.584551 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-metrics-certs\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.585203 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.585322 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.586728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.587556 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-images\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.588163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3185930-86ef-4dff-b4ec-0d60800fb76e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.588630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-mountpoint-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.589288 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-images\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.589376 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.589907 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-proxy-tls\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.594121 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-srv-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.594183 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mf8z\" (UniqueName: \"kubernetes.io/projected/f0da58a9-1a93-439c-8f83-8eba0c4a9961-kube-api-access-9mf8z\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.593024 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.595193 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.595437 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-proxy-tls\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.604316 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.610572 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fcxcc"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.617935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-srv-cert\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.621288 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zs4gm\" (UniqueName: \"kubernetes.io/projected/526d70c0-aa70-47f7-9daf-60c0e78d8dc2-kube-api-access-zs4gm\") pod \"machine-config-operator-74547568cd-kkq7l\" (UID: \"526d70c0-aa70-47f7-9daf-60c0e78d8dc2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.638182 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgv2n\" (UniqueName: \"kubernetes.io/projected/f42f8303-dc13-4de1-a5ab-a96a1606c0b8-kube-api-access-sgv2n\") pod \"olm-operator-6b444d44fb-k5qxt\" (UID: \"f42f8303-dc13-4de1-a5ab-a96a1606c0b8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.669777 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.669999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb934f19-6c2c-4b42-a0c7-829ec54c20ef-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gksxd\" (UID: \"bb934f19-6c2c-4b42-a0c7-829ec54c20ef\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: W0103 03:17:05.672722 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1339946_fe37_4d87_b959_fd1349323679.slice/crio-7e8941bfd5b1f8291b9f9d1005f68dc3d0735b7fd30f3f9d9f549f30b5fafe8c WatchSource:0}: Error finding container 7e8941bfd5b1f8291b9f9d1005f68dc3d0735b7fd30f3f9d9f549f30b5fafe8c: Status 404 returned error can't find the container with id 7e8941bfd5b1f8291b9f9d1005f68dc3d0735b7fd30f3f9d9f549f30b5fafe8c Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.680235 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbcq\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.698861 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699378 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-stats-auth\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.699507 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.199488155 +0000 UTC m=+146.049378460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bc5\" (UniqueName: \"kubernetes.io/projected/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-kube-api-access-q5bc5\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699561 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-csi-data-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699597 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bda2c6fa-8070-438b-b801-0ac1a30822e8-cert\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699618 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-node-bootstrap-token\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699643 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8wc\" (UniqueName: \"kubernetes.io/projected/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-kube-api-access-pr8wc\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699689 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99d5586b-555b-4f25-8e1c-c329dffc92fc-tmpfs\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699707 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8cb\" (UniqueName: \"kubernetes.io/projected/bda2c6fa-8070-438b-b801-0ac1a30822e8-kube-api-access-8x8cb\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92be181f-28e2-4b83-a7de-669db662b52c-metrics-tls\") pod 
\"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-registration-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699777 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699796 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-metrics-certs\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699814 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3185930-86ef-4dff-b4ec-0d60800fb76e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699833 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-mountpoint-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699853 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mf8z\" (UniqueName: \"kubernetes.io/projected/f0da58a9-1a93-439c-8f83-8eba0c4a9961-kube-api-access-9mf8z\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699886 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-cabundle\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699903 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzgt\" (UniqueName: \"kubernetes.io/projected/92be181f-28e2-4b83-a7de-669db662b52c-kube-api-access-pwzgt\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699939 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699957 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkq2n\" (UniqueName: \"kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.699977 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlmr\" (UniqueName: \"kubernetes.io/projected/5bebc5d9-35a7-4154-873e-65d60f85f9b6-kube-api-access-vdlmr\") pod \"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700002 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-key\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700019 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-srv-cert\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700038 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0da58a9-1a93-439c-8f83-8eba0c4a9961-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-webhook-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700093 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-config\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700110 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700129 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx52m\" (UniqueName: \"kubernetes.io/projected/f3150af1-742e-4b09-afe0-a819ddedc864-kube-api-access-qx52m\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700145 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-plugins-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-certs\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700178 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-service-ca-bundle\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700200 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bebc5d9-35a7-4154-873e-65d60f85f9b6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700213 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-csi-data-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700218 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-default-certificate\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc 
kubenswrapper[4746]: I0103 03:17:05.700285 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92be181f-28e2-4b83-a7de-669db662b52c-config-volume\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700330 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0da58a9-1a93-439c-8f83-8eba0c4a9961-proxy-tls\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c4jb\" (UniqueName: \"kubernetes.io/projected/078fac5f-8d76-4f20-9857-18b74a4ebab0-kube-api-access-9c4jb\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700378 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-profile-collector-cert\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700400 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-serving-cert\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700432 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllcf\" (UniqueName: \"kubernetes.io/projected/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-kube-api-access-jllcf\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700448 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-socket-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700499 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8zlc\" (UniqueName: \"kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700532 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700574 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700591 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhs8k\" (UniqueName: \"kubernetes.io/projected/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-kube-api-access-bhs8k\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700609 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k728w\" (UniqueName: \"kubernetes.io/projected/99d5586b-555b-4f25-8e1c-c329dffc92fc-kube-api-access-k728w\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.700626 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6hvd\" (UniqueName: \"kubernetes.io/projected/c3185930-86ef-4dff-b4ec-0d60800fb76e-kube-api-access-m6hvd\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.702544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.703013 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-socket-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.705261 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-default-certificate\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.705571 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.708868 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92be181f-28e2-4b83-a7de-669db662b52c-config-volume\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.709005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0da58a9-1a93-439c-8f83-8eba0c4a9961-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.709281 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/99d5586b-555b-4f25-8e1c-c329dffc92fc-tmpfs\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.708940 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-stats-auth\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.710560 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-mountpoint-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.710711 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-plugins-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.712152 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-lnzfg"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.712145 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-node-bootstrap-token\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.712447 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.212431254 +0000 UTC m=+146.062321559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.713058 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-apiservice-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.713466 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-service-ca-bundle\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.713606 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3150af1-742e-4b09-afe0-a819ddedc864-registration-dir\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.714998 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-srv-cert\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.715379 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.716552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-key\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.717067 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0da58a9-1a93-439c-8f83-8eba0c4a9961-proxy-tls\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.717954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-serving-cert\") pod \"service-ca-operator-777779d784-kcq6f\" 
(UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.717999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bda2c6fa-8070-438b-b801-0ac1a30822e8-cert\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.718344 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92be181f-28e2-4b83-a7de-669db662b52c-metrics-tls\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.718539 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/078fac5f-8d76-4f20-9857-18b74a4ebab0-signing-cabundle\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.718717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-metrics-certs\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.718826 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-config\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.719011 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/99d5586b-555b-4f25-8e1c-c329dffc92fc-webhook-cert\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.719333 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-certs\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.729540 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3185930-86ef-4dff-b4ec-0d60800fb76e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.731601 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.737891 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5bebc5d9-35a7-4154-873e-65d60f85f9b6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.741293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.761372 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8cb\" (UniqueName: \"kubernetes.io/projected/bda2c6fa-8070-438b-b801-0ac1a30822e8-kube-api-access-8x8cb\") pod \"ingress-canary-24ldq\" (UID: \"bda2c6fa-8070-438b-b801-0ac1a30822e8\") " pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.770414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bc5\" (UniqueName: \"kubernetes.io/projected/fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27-kube-api-access-q5bc5\") pod \"catalog-operator-68c6474976-5kwqt\" (UID: \"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.779933 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6hvd\" (UniqueName: \"kubernetes.io/projected/c3185930-86ef-4dff-b4ec-0d60800fb76e-kube-api-access-m6hvd\") pod \"package-server-manager-789f6589d5-4rdkk\" (UID: \"c3185930-86ef-4dff-b4ec-0d60800fb76e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.795374 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllcf\" (UniqueName: \"kubernetes.io/projected/0e7ad5c3-05bf-4244-a5c9-e218138c0ceb-kube-api-access-jllcf\") pod \"service-ca-operator-777779d784-kcq6f\" (UID: \"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.801747 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.301711551 +0000 UTC m=+146.151601856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.801411 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.801956 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-24ldq" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.802244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.802668 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.302629283 +0000 UTC m=+146.152519578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.824400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8zlc\" (UniqueName: \"kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc\") pod \"collect-profiles-29456835-bvq5m\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.858804 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlmr\" (UniqueName: \"kubernetes.io/projected/5bebc5d9-35a7-4154-873e-65d60f85f9b6-kube-api-access-vdlmr\") pod \"control-plane-machine-set-operator-78cbb6b69f-bj7mx\" (UID: \"5bebc5d9-35a7-4154-873e-65d60f85f9b6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.863018 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.872740 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhs8k\" (UniqueName: \"kubernetes.io/projected/caccb049-c5ee-4d59-8b3e-1c4f54b81f10-kube-api-access-bhs8k\") pod \"router-default-5444994796-k57gl\" (UID: \"caccb049-c5ee-4d59-8b3e-1c4f54b81f10\") " pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.898680 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k728w\" (UniqueName: \"kubernetes.io/projected/99d5586b-555b-4f25-8e1c-c329dffc92fc-kube-api-access-k728w\") pod \"packageserver-d55dfcdfc-gr5gr\" (UID: \"99d5586b-555b-4f25-8e1c-c329dffc92fc\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.904722 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.907819 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:05 crc kubenswrapper[4746]: E0103 03:17:05.908438 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.408421854 +0000 UTC m=+146.258312159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.908829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-js77f"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.914095 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.923957 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8wc\" (UniqueName: \"kubernetes.io/projected/0a7bfb36-9f7e-4930-9e71-0173504b7ae6-kube-api-access-pr8wc\") pod \"machine-config-server-lkd2p\" (UID: \"0a7bfb36-9f7e-4930-9e71-0173504b7ae6\") " pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.939838 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.940553 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.947758 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c4jb\" (UniqueName: \"kubernetes.io/projected/078fac5f-8d76-4f20-9857-18b74a4ebab0-kube-api-access-9c4jb\") pod \"service-ca-9c57cc56f-wwvt9\" (UID: \"078fac5f-8d76-4f20-9857-18b74a4ebab0\") " pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.948161 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.955130 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.960514 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx52m\" (UniqueName: \"kubernetes.io/projected/f3150af1-742e-4b09-afe0-a819ddedc864-kube-api-access-qx52m\") pod \"csi-hostpathplugin-xgssz\" (UID: \"f3150af1-742e-4b09-afe0-a819ddedc864\") " pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.973838 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzgt\" (UniqueName: \"kubernetes.io/projected/92be181f-28e2-4b83-a7de-669db662b52c-kube-api-access-pwzgt\") pod \"dns-default-t84km\" (UID: \"92be181f-28e2-4b83-a7de-669db662b52c\") " pod="openshift-dns/dns-default-t84km" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.988285 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.988759 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.991235 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl"] Jan 03 03:17:05 crc kubenswrapper[4746]: I0103 03:17:05.997600 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mf8z\" (UniqueName: \"kubernetes.io/projected/f0da58a9-1a93-439c-8f83-8eba0c4a9961-kube-api-access-9mf8z\") pod \"machine-config-controller-84d6567774-rq6g4\" (UID: \"f0da58a9-1a93-439c-8f83-8eba0c4a9961\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.005984 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.015518 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.015870 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.515858115 +0000 UTC m=+146.365748420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.020578 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.039872 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.042107 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc"] Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.054290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lkd2p" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.069215 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t84km" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.080153 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkq2n\" (UniqueName: \"kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n\") pod \"marketplace-operator-79b997595-mpsxq\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.092501 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.094494 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5"] Jan 03 03:17:06 crc kubenswrapper[4746]: W0103 03:17:06.096979 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff7b4792_fd79_4c60_bafa_c9f05f0e0deb.slice/crio-d6149c31d232533ae595a9b4d4e3d895bf5c2d18a2cbb4ac1335b52cbb2c4e49 WatchSource:0}: Error finding container d6149c31d232533ae595a9b4d4e3d895bf5c2d18a2cbb4ac1335b52cbb2c4e49: Status 404 returned error can't find the container with id d6149c31d232533ae595a9b4d4e3d895bf5c2d18a2cbb4ac1335b52cbb2c4e49 Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.123167 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.123461 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.623437669 +0000 UTC m=+146.473327974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.123551 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.124420 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.624412152 +0000 UTC m=+146.474302457 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.224036 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.224592 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.227070 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.727034008 +0000 UTC m=+146.576924313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.229260 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.238265 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.738246826 +0000 UTC m=+146.588137131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.269964 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.337757 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.347056 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 03:17:06.847012999 +0000 UTC m=+146.696903304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.347157 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.347861 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.847851619 +0000 UTC m=+146.697741924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.387846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" event={"ID":"844f9e49-a60f-445f-a3a2-c92bb3800691","Type":"ContainerStarted","Data":"4502ee92825102b8ba633cc572ec867e2df45c77fdf6450841dab1686b3633ae"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.391387 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" event={"ID":"9d4e4b7f-a115-44f6-93d2-4649b99340c3","Type":"ContainerStarted","Data":"cff0381033be9d8420248bc5b0e3a80c800fb8cff3c677a9950418357fa4082e"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.391751 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.392886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" event={"ID":"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb","Type":"ContainerStarted","Data":"d6149c31d232533ae595a9b4d4e3d895bf5c2d18a2cbb4ac1335b52cbb2c4e49"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.394434 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" event={"ID":"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8","Type":"ContainerStarted","Data":"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.394748 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.399045 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" event={"ID":"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08","Type":"ContainerStarted","Data":"f9fd8a83d535d26360e53c573192810ac8acdbc8ec0d37ec231841001a6e95ca"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.418193 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" event={"ID":"7a51c938-dfaf-4222-afb6-0cd79e445537","Type":"ContainerStarted","Data":"edb85ec8c802917c8baddd2d177705f93ccfb8fdd29847f0dd8c25ba3d89ede3"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.418241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" event={"ID":"7a51c938-dfaf-4222-afb6-0cd79e445537","Type":"ContainerStarted","Data":"41d442c5648df8ab279b9773bb8011198ebbc5f9469dcdad384b397a10f3cbb7"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.424013 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fws24" event={"ID":"e0f102bc-480f-4c8f-b3e3-7afa141e912c","Type":"ContainerStarted","Data":"74420a394e4be2e3ea3951b8fa427705aae9f2c840e5e0ca2df21a554a5894ce"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.432367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" event={"ID":"27d52d81-bec6-495c-b080-d3244284d228","Type":"ContainerStarted","Data":"0c9a19070f070d21a89b2de15e4e4736c79dda0b518dd3f212e486ff8589de4e"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.433788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" event={"ID":"45e8e97f-f055-4a33-94fa-687aa5893d06","Type":"ContainerStarted","Data":"d6c7b5e041edae07a9e94ed0d4d4e8fe1d7555b01f6bfeaf565df578f49d26f1"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.434463 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" event={"ID":"8fca306c-5880-4915-8d7e-c4e9df65d59e","Type":"ContainerStarted","Data":"5e43bde5d8691847d966ee9ac64535f43164b0c1d885d27e5385a4bdba5e24d5"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.435371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j2jgm" event={"ID":"f93d60e3-b792-4b40-88fd-b979e91021f3","Type":"ContainerStarted","Data":"fb2d4a7879c39554a192141228dac8786e8d977007b50a290e9b62da268d3498"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.435890 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.437178 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-j2jgm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.437236 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j2jgm" podUID="f93d60e3-b792-4b40-88fd-b979e91021f3" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.438350 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" event={"ID":"fc99dd78-6470-4b4c-8db2-d01982e37009","Type":"ContainerStarted","Data":"ee77b9943e1df47d0fc91d81b8120973ba741dee560712ad61a2f5e7860b2f7a"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.440747 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" event={"ID":"1edd4480-eb8f-4841-b34b-df768497de26","Type":"ContainerStarted","Data":"b710bbb2f9870668f8120a5434bc0afefaa85830700d1a0139d50b8fe12ab2eb"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.440805 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" event={"ID":"1edd4480-eb8f-4841-b34b-df768497de26","Type":"ContainerStarted","Data":"989f4806742f93c119d0fb19004267517346d841d0cedc20e7482e9b5af1b180"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.441643 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" event={"ID":"6d8cd430-5229-4772-8c83-9fbdbeaf54de","Type":"ContainerStarted","Data":"5fb99619bfbbdabcf6413d2dd121e96f099962acafca451f14545b5b9109236c"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.442590 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" event={"ID":"26985b93-c203-432e-b302-9a73c40803e8","Type":"ContainerStarted","Data":"247d4b03513d7bba63c2e59ba80404d1f7d35186ee91fabc2ca661c89bc74966"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.444422 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" event={"ID":"84efb631-0927-4470-9a6c-9af70fbdb9a0","Type":"ContainerStarted","Data":"94ba6e436042da4be6cc605ac2cd5467fe7961c1b68426f936a7d1d0507770dc"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.446345 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" event={"ID":"d1339946-fe37-4d87-b959-fd1349323679","Type":"ContainerStarted","Data":"7e8941bfd5b1f8291b9f9d1005f68dc3d0735b7fd30f3f9d9f549f30b5fafe8c"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.448140 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.448707 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.948683832 +0000 UTC m=+146.798574137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.449093 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.449259 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.449507 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:06.949493521 +0000 UTC m=+146.799383826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.450343 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" event={"ID":"802e491b-8f4e-4cc7-b6df-756478ebbe1e","Type":"ContainerStarted","Data":"7f65f911af006f2bccd908cb8ba5a175476ba545b550a7990407606d1b889379"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.453320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" event={"ID":"42098287-d6c9-4d15-a33b-2dbf74558a73","Type":"ContainerStarted","Data":"7f6a9bb61af5eb53f54230e89e049acccbc1f104b578bf3cbb83c47f716fff70"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.453386 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" event={"ID":"42098287-d6c9-4d15-a33b-2dbf74558a73","Type":"ContainerStarted","Data":"3d1e30c48318901f9a88388e6dc4daedaaaef2f764742eb2a2a6d3ce74b107cc"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.453926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" 
event={"ID":"d9864171-c848-4905-96fd-232f0f0df7f9","Type":"ContainerStarted","Data":"3b64da222331bfb45a150ab5a7f4752312e98bab8be479dd5147b8a19ca856a9"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.455795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" event={"ID":"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f","Type":"ContainerStarted","Data":"d325649afa7e491cadaedc60632848e44e37a66ef6d177c39c53afe69d78e81e"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.455878 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" event={"ID":"21795ebc-fb42-4e7d-8f3f-76dcf85ed71f","Type":"ContainerStarted","Data":"df45832763592192d8954de8802f404212b2d93816cd03c78f58d8ce2c43140b"} Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.471359 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.550934 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.551129 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.051097662 +0000 UTC m=+146.900987977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.553349 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.553543 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.553616 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.553736 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.554708 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.054700719 +0000 UTC m=+146.904591024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.554931 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.567059 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.567945 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.597412 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:17:06 crc kubenswrapper[4746]: W0103 03:17:06.608937 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaccb049_c5ee_4d59_8b3e_1c4f54b81f10.slice/crio-0c02296efe10f11dd2801f68cf440deb60422726667af4cf2377b09b6d4e40ed WatchSource:0}: Error finding container 0c02296efe10f11dd2801f68cf440deb60422726667af4cf2377b09b6d4e40ed: Status 404 returned error can't find the container with id 0c02296efe10f11dd2801f68cf440deb60422726667af4cf2377b09b6d4e40ed Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.654829 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.658783 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.158758699 +0000 UTC m=+147.008649004 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.691642 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.705396 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.719276 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.758582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.785506 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.285482741 +0000 UTC m=+147.135373046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.859268 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.859633 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.359618125 +0000 UTC m=+147.209508430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.902822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-khnmh"] Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.923980 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-24ldq"] Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.942622 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv"] Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.960595 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:06 crc kubenswrapper[4746]: E0103 03:17:06.961026 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.461011421 +0000 UTC m=+147.310901726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:06 crc kubenswrapper[4746]: I0103 03:17:06.976230 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd"] Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.061470 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.062359 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.562342755 +0000 UTC m=+147.412233060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.163845 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.164964 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.66494678 +0000 UTC m=+147.514837085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.266120 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.266514 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.76649876 +0000 UTC m=+147.616389065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.369665 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.382709 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.88269143 +0000 UTC m=+147.732581735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.474922 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.476016 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:07.975990602 +0000 UTC m=+147.825880907 (durationBeforeRetry 500ms). 
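Every failed operation above is immediately followed by a nestedpendingoperations line of the form "No retries permitted until <timestamp> (durationBeforeRetry 500ms)": the operation executor remembers when an operation last failed and refuses to start the same one again until that window has passed, which is why the identical mount and unmount attempts recur roughly twice per second here. A rough sketch of such a retry gate, assuming a fixed 500ms window (the real code keys operations per volume and pod and can grow the backoff):

    package main

    import (
        "fmt"
        "time"
    )

    // retryGate is an illustrative per-operation backoff tracker.
    type retryGate struct {
        notBefore time.Time
        backoff   time.Duration
    }

    // try runs op unless the previous failure's backoff window is still open.
    func (g *retryGate) try(now time.Time, op func() error) error {
        if now.Before(g.notBefore) {
            return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
                g.notBefore.Format(time.RFC3339Nano), g.backoff)
        }
        if err := op(); err != nil {
            g.notBefore = now.Add(g.backoff) // next attempt only after the backoff window
            return err
        }
        return nil
    }

    func main() {
        g := &retryGate{backoff: 500 * time.Millisecond}
        failingMount := func() error { return fmt.Errorf("driver not registered") }

        if err := g.try(time.Now(), failingMount); err != nil {
            fmt.Println("first attempt:", err)
        }
        if err := g.try(time.Now(), failingMount); err != nil {
            fmt.Println("immediate retry blocked:", err)
        }
    }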
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.490344 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-wndv7" podStartSLOduration=128.490319665 podStartE2EDuration="2m8.490319665s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.488562893 +0000 UTC m=+147.338453198" watchObservedRunningTime="2026-01-03 03:17:07.490319665 +0000 UTC m=+147.340209970" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.517398 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m"] Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.522799 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24ldq" event={"ID":"bda2c6fa-8070-438b-b801-0ac1a30822e8","Type":"ContainerStarted","Data":"7dc50a922e26e8c06121504cb9f76906a9fe19b9896f8ecdfadf58c4555d6257"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.523535 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-d58zr" podStartSLOduration=128.52351681 podStartE2EDuration="2m8.52351681s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.522472415 +0000 UTC m=+147.372362720" watchObservedRunningTime="2026-01-03 03:17:07.52351681 +0000 UTC m=+147.373407115" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.530993 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lkd2p" event={"ID":"0a7bfb36-9f7e-4930-9e71-0173504b7ae6","Type":"ContainerStarted","Data":"99f4508fbb95ab1e7b5fdf510a3ad9056e9906bed2070d813dc7dc706f26c0bd"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.548449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" event={"ID":"bb934f19-6c2c-4b42-a0c7-829ec54c20ef","Type":"ContainerStarted","Data":"7417ae7c6e229873a9b69659330d735ac9aabbf341c6b9a6208a58c7ab6179e9"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.580210 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.580324 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" 
podStartSLOduration=128.580310069 podStartE2EDuration="2m8.580310069s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.542208427 +0000 UTC m=+147.392098732" watchObservedRunningTime="2026-01-03 03:17:07.580310069 +0000 UTC m=+147.430200374" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.580473 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bzztq" podStartSLOduration=128.580469202 podStartE2EDuration="2m8.580469202s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.580179435 +0000 UTC m=+147.430069740" watchObservedRunningTime="2026-01-03 03:17:07.580469202 +0000 UTC m=+147.430359497" Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.580528 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.080515324 +0000 UTC m=+147.930405629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.599063 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.599103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" event={"ID":"84efb631-0927-4470-9a6c-9af70fbdb9a0","Type":"ContainerStarted","Data":"10771e020cfe9ac221a57fadec965cac04f69284bab63bb0f9b85c305ff3c2a8"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.599124 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" event={"ID":"d1339946-fe37-4d87-b959-fd1349323679","Type":"ContainerStarted","Data":"edd9e9f8f839d5d2fbed01127f245f9ca616e23439b43ae78282d3b58120f2f1"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.599366 4746 patch_prober.go:28] interesting pod/console-operator-58897d9998-fcxcc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.599408 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" podUID="d1339946-fe37-4d87-b959-fd1349323679" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/readyz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.611821 4746 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" podStartSLOduration=128.611796082 podStartE2EDuration="2m8.611796082s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.609369764 +0000 UTC m=+147.459260069" watchObservedRunningTime="2026-01-03 03:17:07.611796082 +0000 UTC m=+147.461686387" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.620971 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" event={"ID":"42098287-d6c9-4d15-a33b-2dbf74558a73","Type":"ContainerStarted","Data":"bca7160507ab5e06e182b0918bb8d15c816d4b39c2f6c46d652c2fb8f1b48830"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.624846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" event={"ID":"6d8cd430-5229-4772-8c83-9fbdbeaf54de","Type":"ContainerStarted","Data":"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.626022 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.629335 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k57gl" event={"ID":"caccb049-c5ee-4d59-8b3e-1c4f54b81f10","Type":"ContainerStarted","Data":"0c02296efe10f11dd2801f68cf440deb60422726667af4cf2377b09b6d4e40ed"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.638814 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-j2jgm" podStartSLOduration=128.638791798 podStartE2EDuration="2m8.638791798s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.636480873 +0000 UTC m=+147.486371178" watchObservedRunningTime="2026-01-03 03:17:07.638791798 +0000 UTC m=+147.488682103" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.639578 4746 generic.go:334] "Generic (PLEG): container finished" podID="27d52d81-bec6-495c-b080-d3244284d228" containerID="7ed288a2c87b8ca25db01c62ef20ef89fca40f01c2d9494075c97e6263d44a82" exitCode=0 Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.639630 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" event={"ID":"27d52d81-bec6-495c-b080-d3244284d228","Type":"ContainerDied","Data":"7ed288a2c87b8ca25db01c62ef20ef89fca40f01c2d9494075c97e6263d44a82"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.643583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" event={"ID":"d5a89946-a489-411e-8e5d-07e166de5088","Type":"ContainerStarted","Data":"c8a7ee75ad0e194b201121789967d1f82a5ae3a46311a8b7523ea785a6507ffe"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.660810 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" podStartSLOduration=128.660791044 podStartE2EDuration="2m8.660791044s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.660507218 +0000 UTC m=+147.510397513" watchObservedRunningTime="2026-01-03 03:17:07.660791044 +0000 UTC m=+147.510681349" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.660812 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" event={"ID":"802e491b-8f4e-4cc7-b6df-756478ebbe1e","Type":"ContainerStarted","Data":"3097720d44f5d1ff3119d376bd17426f2c30b5590f5eb15824a331d42680f959"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.661926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" event={"ID":"1b5b425d-89ff-4cf3-97c4-7263f3a345cf","Type":"ContainerStarted","Data":"5829e5eff03b114042b0380bc006927c19597c883fe555b1aca3f46be4adba19"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.664583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" event={"ID":"26985b93-c203-432e-b302-9a73c40803e8","Type":"ContainerStarted","Data":"8f0560d8ca8cc791edfcbee2a6f0228f522c438c364074de3211c8d9edd0ef80"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.674293 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08" containerID="4291ecab7af030d604f59ce23f454e308d089a5b943e1ecd05510bf343c6aceb" exitCode=0 Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.676456 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" event={"ID":"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08","Type":"ContainerDied","Data":"4291ecab7af030d604f59ce23f454e308d089a5b943e1ecd05510bf343c6aceb"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.687256 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.687549 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.187520974 +0000 UTC m=+148.037411269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.687899 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.688276 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.188266902 +0000 UTC m=+148.038157207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.689101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" event={"ID":"8fca306c-5880-4915-8d7e-c4e9df65d59e","Type":"ContainerStarted","Data":"7796bc2b5564010f54081280ce473987a92fb3bd67fc8fad0d38f89631d2a0a0"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.691819 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fws24" event={"ID":"e0f102bc-480f-4c8f-b3e3-7afa141e912c","Type":"ContainerStarted","Data":"ec14f6d5327857208d29f9c281569f9a888bb94aa8f8efbe40d2cc2bbad5a2a4"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.700686 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" event={"ID":"d9864171-c848-4905-96fd-232f0f0df7f9","Type":"ContainerStarted","Data":"22cc7e9d67f82efb22987e33e393716232ae27212705a9a22d0b4dd215e6e371"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.724756 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" event={"ID":"45e8e97f-f055-4a33-94fa-687aa5893d06","Type":"ContainerStarted","Data":"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270"} Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.725511 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.725569 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-j2jgm container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.725628 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j2jgm" podUID="f93d60e3-b792-4b40-88fd-b979e91021f3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.736352 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dtvtx" podStartSLOduration=128.736328812 podStartE2EDuration="2m8.736328812s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.735593154 +0000 UTC m=+147.585483459" watchObservedRunningTime="2026-01-03 03:17:07.736328812 +0000 UTC m=+147.586219117" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.739899 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" podStartSLOduration=128.739888797 podStartE2EDuration="2m8.739888797s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.698529177 +0000 UTC m=+147.548419482" watchObservedRunningTime="2026-01-03 03:17:07.739888797 +0000 UTC m=+147.589779092" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.748334 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.776872 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-swswl" podStartSLOduration=128.776850541 podStartE2EDuration="2m8.776850541s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.774143587 +0000 UTC m=+147.624033902" watchObservedRunningTime="2026-01-03 03:17:07.776850541 +0000 UTC m=+147.626740836" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.789067 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.790250 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.290229162 +0000 UTC m=+148.140119467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.852176 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qxk8h" podStartSLOduration=128.852156323 podStartE2EDuration="2m8.852156323s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.819469451 +0000 UTC m=+147.669359756" watchObservedRunningTime="2026-01-03 03:17:07.852156323 +0000 UTC m=+147.702046628" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.862095 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fws24" podStartSLOduration=128.862074021 podStartE2EDuration="2m8.862074021s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.851797085 +0000 UTC m=+147.701687390" watchObservedRunningTime="2026-01-03 03:17:07.862074021 +0000 UTC m=+147.711964326" Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.893149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:07 crc kubenswrapper[4746]: E0103 03:17:07.893609 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.393583875 +0000 UTC m=+148.243474180 (durationBeforeRetry 500ms). 
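The "Observed pod startup duration" entries report podStartSLOduration and podStartE2EDuration as the gap between podCreationTimestamp (03:14:59 for most of these pods) and the time the pod was observed running, which is why they all land around 2m8s to 2m10s; firstStartedPulling and lastFinishedPulling are the Go zero time, i.e. no image pull was recorded for them in this window. The arithmetic can be checked directly against the openshift-controller-manager-operator entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the openshift-controller-manager-operator entry.
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-03 03:14:59 +0000 UTC")
        observed, _ := time.Parse(layout, "2026-01-03 03:17:07.852156323 +0000 UTC")

        // Prints 2m8.852156323s, matching podStartE2EDuration in the log.
        fmt.Println("pod startup duration:", observed.Sub(created))
    }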
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:07 crc kubenswrapper[4746]: I0103 03:17:07.905738 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" podStartSLOduration=128.905699335 podStartE2EDuration="2m8.905699335s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:07.905143471 +0000 UTC m=+147.755033776" watchObservedRunningTime="2026-01-03 03:17:07.905699335 +0000 UTC m=+147.755589640" Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.034689 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.035414 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.535393188 +0000 UTC m=+148.385283493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.047249 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.095425 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hxwk5" podStartSLOduration=129.095407454 podStartE2EDuration="2m9.095407454s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:08.093380216 +0000 UTC m=+147.943270521" watchObservedRunningTime="2026-01-03 03:17:08.095407454 +0000 UTC m=+147.945297759" Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.115456 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.126256 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.136700 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.137054 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.63704408 +0000 UTC m=+148.486934385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.172707 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f"] Jan 03 03:17:08 crc kubenswrapper[4746]: W0103 03:17:08.233878 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf42f8303_dc13_4de1_a5ab_a96a1606c0b8.slice/crio-f57a18ec429d6324d7fe386a54936a6bf477a1888b222da4ebfb148096394680 WatchSource:0}: Error finding container f57a18ec429d6324d7fe386a54936a6bf477a1888b222da4ebfb148096394680: Status 404 returned error can't find the container with id f57a18ec429d6324d7fe386a54936a6bf477a1888b222da4ebfb148096394680 Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.238501 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.238945 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.738922748 +0000 UTC m=+148.588813053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.340144 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.340909 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.840891398 +0000 UTC m=+148.690781693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.442774 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.442795 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.442881 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.942863568 +0000 UTC m=+148.792753873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.445335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.445690 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:08.945678185 +0000 UTC m=+148.795568490 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.454068 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-wwvt9"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.535638 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.547501 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.547957 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.047934442 +0000 UTC m=+148.897824747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.610478 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.633771 4746 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-sw9vc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.633846 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.657421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.658302 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.158284603 +0000 UTC m=+149.008174908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.672470 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.763991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.764979 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.264960745 +0000 UTC m=+149.114851050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.771144 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xgssz"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.785903 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.823480 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t84km"] Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.868216 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.868827 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-03 03:17:09.36881443 +0000 UTC m=+149.218704735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:08 crc kubenswrapper[4746]: I0103 03:17:08.974056 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:08 crc kubenswrapper[4746]: E0103 03:17:08.974335 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.474317215 +0000 UTC m=+149.324207520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.042889 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" event={"ID":"7ec04adb-b8c3-41f1-9eb9-5bb3625c8d08","Type":"ContainerStarted","Data":"a02fe17a6f13d85c03da898cb78ac6c4f84e7de79ee957a6aa45a17c45c15e26"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.079768 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.080180 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.580165038 +0000 UTC m=+149.430055343 (durationBeforeRetry 500ms). 
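At 03:17:08 the sync loop also picks up an UPDATE for hostpath-provisioner/csi-hostpathplugin-xgssz, i.e. the CSI plugin pod itself is only now being brought up, which is consistent with the driver not yet appearing in the kubelet's registered list. On the node, a registered driver normally leaves a registration socket under the kubelet's plugin-registration directory; a small diagnostic sketch along those lines (the /var/lib/kubelet/plugins_registry path is the conventional default and an assumption here):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Conventional kubelet plugin-registration directory; adjust per deployment.
        const regDir = "/var/lib/kubelet/plugins_registry"
        entries, err := os.ReadDir(regDir)
        if err != nil {
            fmt.Println("cannot read registration dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            if strings.Contains(e.Name(), "kubevirt.io.hostpath-provisioner") {
                fmt.Println("registration socket present:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no registration socket for kubevirt.io.hostpath-provisioner;" +
                " matches the 'not found in the list of registered CSI drivers' errors above")
        }
    }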
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.085969 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" podStartSLOduration=130.085947466 podStartE2EDuration="2m10.085947466s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.084472971 +0000 UTC m=+148.934363266" watchObservedRunningTime="2026-01-03 03:17:09.085947466 +0000 UTC m=+148.935837771" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.093741 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" event={"ID":"802e491b-8f4e-4cc7-b6df-756478ebbe1e","Type":"ContainerStarted","Data":"8c6d60323bd254212a61516c32ffc9ceefa67cfce89b6574bda388ef34ad2c15"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.144542 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mlkvc" podStartSLOduration=130.144515667 podStartE2EDuration="2m10.144515667s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.143956914 +0000 UTC m=+148.993847209" watchObservedRunningTime="2026-01-03 03:17:09.144515667 +0000 UTC m=+148.994405972" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.181595 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.182682 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.68264736 +0000 UTC m=+149.532537665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.200257 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lkd2p" event={"ID":"0a7bfb36-9f7e-4930-9e71-0173504b7ae6","Type":"ContainerStarted","Data":"93f73ed687c86d72d1f499f464309c49705a451b68a62f359bb31f78c86f5888"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.237437 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" event={"ID":"f42f8303-dc13-4de1-a5ab-a96a1606c0b8","Type":"ContainerStarted","Data":"f57a18ec429d6324d7fe386a54936a6bf477a1888b222da4ebfb148096394680"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.238572 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.248075 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" event={"ID":"bb934f19-6c2c-4b42-a0c7-829ec54c20ef","Type":"ContainerStarted","Data":"e54aa9f12a5dd59c3642a5d7682126600ed2b6e300226fddc5b7a074354e554f"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.248997 4746 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-k5qxt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.249037 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" podUID="f42f8303-dc13-4de1-a5ab-a96a1606c0b8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.249372 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lkd2p" podStartSLOduration=7.249353066 podStartE2EDuration="7.249353066s" podCreationTimestamp="2026-01-03 03:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.243979797 +0000 UTC m=+149.093870102" watchObservedRunningTime="2026-01-03 03:17:09.249353066 +0000 UTC m=+149.099243371" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.255164 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" event={"ID":"99d5586b-555b-4f25-8e1c-c329dffc92fc","Type":"ContainerStarted","Data":"d6262e2cce2ebf1953a9506efa692644394c033ba6603693a2c126103567c2d8"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.256052 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.266614 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1e90fa1da041f81700f9afb8dfbb8d42d80cefcedff2af376283a1d44b903a0c"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.267088 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr5gr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.267126 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" podUID="99d5586b-555b-4f25-8e1c-c329dffc92fc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.298325 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" event={"ID":"1b5b425d-89ff-4cf3-97c4-7263f3a345cf","Type":"ContainerStarted","Data":"ab42d10a97029dedbff16b5e0e24f3c9cce36b99174bff6828606aca8d56d099"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.301484 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.301787 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.80177465 +0000 UTC m=+149.651664965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.302959 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" podStartSLOduration=130.302938068 podStartE2EDuration="2m10.302938068s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.298154854 +0000 UTC m=+149.148045159" watchObservedRunningTime="2026-01-03 03:17:09.302938068 +0000 UTC m=+149.152828373" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.303820 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c3b1438e048e2c7a2a3af471379ca6a678a1d2711068881ae7a85e8050c43e48"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.319940 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" event={"ID":"fc99dd78-6470-4b4c-8db2-d01982e37009","Type":"ContainerStarted","Data":"2b9f21c40eee836dafbfa8dbdf8924c8e40712745efa34fa3075471679f1f67e"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.371222 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" event={"ID":"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27","Type":"ContainerStarted","Data":"3a33908b468ac4abb1dc117390524d7ed4c375346cfcf56faa81d29cc6f3657b"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.389038 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" event={"ID":"15a9ed2e-64d7-4917-a5fc-857b75246dd7","Type":"ContainerStarted","Data":"b471414cead322d354545445fcab7b5e7a6db0c3c241eebfb972c9a1b9ea1a21"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.389091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" event={"ID":"15a9ed2e-64d7-4917-a5fc-857b75246dd7","Type":"ContainerStarted","Data":"fc5240ee1a53935355575fb44729ccac439aff498933a74f6ecbcbf49fd1057e"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.389511 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gksxd" podStartSLOduration=130.389495099 podStartE2EDuration="2m10.389495099s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.389038958 +0000 UTC m=+149.238929263" watchObservedRunningTime="2026-01-03 03:17:09.389495099 +0000 UTC m=+149.239385404" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.390603 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" event={"ID":"526d70c0-aa70-47f7-9daf-60c0e78d8dc2","Type":"ContainerStarted","Data":"fa9b0c5b589a6ca02c470969147a61748652755428b2f4dc04a7dbc8c72def02"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.391264 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerStarted","Data":"19e6e50ec1a31d9eab1943d6beef1acf25b1ed45e49ed6407004ba910d928911"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.391308 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" podStartSLOduration=130.391301573 podStartE2EDuration="2m10.391301573s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.350957777 +0000 UTC m=+149.200848082" watchObservedRunningTime="2026-01-03 03:17:09.391301573 +0000 UTC m=+149.241191878" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.391897 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" event={"ID":"078fac5f-8d76-4f20-9857-18b74a4ebab0","Type":"ContainerStarted","Data":"edfe5cb536bec859bd9857f99c26c436904b0f33bd0018829ce036f403a40139"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.393244 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" event={"ID":"844f9e49-a60f-445f-a3a2-c92bb3800691","Type":"ContainerStarted","Data":"0ddf7723af93cfd8cd20f34e52a843caf8becc3410ae3262683bed3684b9f1bb"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.402848 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.404363 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:09.904335144 +0000 UTC m=+149.754225610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.437716 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z2jrv" podStartSLOduration=130.437698753 podStartE2EDuration="2m10.437698753s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.420936352 +0000 UTC m=+149.270826657" watchObservedRunningTime="2026-01-03 03:17:09.437698753 +0000 UTC m=+149.287589048" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.443398 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" event={"ID":"84efb631-0927-4470-9a6c-9af70fbdb9a0","Type":"ContainerStarted","Data":"b025698108575e7ce4f9b0dd91e36981188000baf0bc86376008c3bf8d08146d"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.459972 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-js77f" podStartSLOduration=130.459958675 podStartE2EDuration="2m10.459958675s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.458519161 +0000 UTC m=+149.308409466" watchObservedRunningTime="2026-01-03 03:17:09.459958675 +0000 UTC m=+149.309848980" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.512579 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" event={"ID":"27d52d81-bec6-495c-b080-d3244284d228","Type":"ContainerStarted","Data":"ee59e27e7272624780729d8a2117685069bd77aa1fbb09bd38fd17cc0938fdec"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.513614 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.517528 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.017506292 +0000 UTC m=+149.867396597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.533726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" event={"ID":"c3185930-86ef-4dff-b4ec-0d60800fb76e","Type":"ContainerStarted","Data":"61b7ddfc1431f3cca987228e83b7167a085ab55197bce7c305d0b30e2ac267bb"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.553479 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" podStartSLOduration=129.553464333 podStartE2EDuration="2m9.553464333s" podCreationTimestamp="2026-01-03 03:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.551409214 +0000 UTC m=+149.401299519" watchObservedRunningTime="2026-01-03 03:17:09.553464333 +0000 UTC m=+149.403354638" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.570616 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-24ldq" event={"ID":"bda2c6fa-8070-438b-b801-0ac1a30822e8","Type":"ContainerStarted","Data":"f7d87f052499fbdab035a2bf4430f2dee242244ad3c3d8ce1b2f72a5b3571dfa"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.597761 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" event={"ID":"ff7b4792-fd79-4c60-bafa-c9f05f0e0deb","Type":"ContainerStarted","Data":"e057a5f0ed37e80ff77065236f4788d788c3caa3685a90c6e31ee43ce1c408d3"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.601147 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nfw8x" podStartSLOduration=130.601131403 podStartE2EDuration="2m10.601131403s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.59931466 +0000 UTC m=+149.449204965" watchObservedRunningTime="2026-01-03 03:17:09.601131403 +0000 UTC m=+149.451021708" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.614245 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.614644 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.114625646 +0000 UTC m=+149.964515951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.624367 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" event={"ID":"1edd4480-eb8f-4841-b34b-df768497de26","Type":"ContainerStarted","Data":"541e9244c13f8490a4ac071383b753c2b3cb5d7bbfb1c72451ec13c4a0104f85"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.653712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k57gl" event={"ID":"caccb049-c5ee-4d59-8b3e-1c4f54b81f10","Type":"ContainerStarted","Data":"d83594773962101036a7a41b3cba751e66474c99bef36d3a6fbf489c8272fc37"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.693973 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" event={"ID":"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb","Type":"ContainerStarted","Data":"009d49bd573b9c6c8d97a684a7dc6733114a3fe9cb6289c47412b24b13ad29b1"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.700569 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2d9l9" podStartSLOduration=130.700555102 podStartE2EDuration="2m10.700555102s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.653726842 +0000 UTC m=+149.503617147" watchObservedRunningTime="2026-01-03 03:17:09.700555102 +0000 UTC m=+149.550445407" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.722511 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" event={"ID":"d5a89946-a489-411e-8e5d-07e166de5088","Type":"ContainerStarted","Data":"794c0bc716fe7c87c90aef9d0b8802e4ad59e2649aa9b20fbcbd3811ccba6527"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.722544 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.723769 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.223755918 +0000 UTC m=+150.073646213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.761395 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-24ldq" podStartSLOduration=7.761363347 podStartE2EDuration="7.761363347s" podCreationTimestamp="2026-01-03 03:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.700841279 +0000 UTC m=+149.550731584" watchObservedRunningTime="2026-01-03 03:17:09.761363347 +0000 UTC m=+149.611253672" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.762477 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" podStartSLOduration=130.762471084 podStartE2EDuration="2m10.762471084s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.759551904 +0000 UTC m=+149.609442209" watchObservedRunningTime="2026-01-03 03:17:09.762471084 +0000 UTC m=+149.612361389" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.775876 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" event={"ID":"5bebc5d9-35a7-4154-873e-65d60f85f9b6","Type":"ContainerStarted","Data":"423d03c25766585072f46c44045b34666a067db37cdeb19a6c5dae5b996953ae"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.775917 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" event={"ID":"5bebc5d9-35a7-4154-873e-65d60f85f9b6","Type":"ContainerStarted","Data":"16f998b81f6bf787cfe59950eb3880081eb49f67a37b93a90e961a87b30dfe53"} Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.792167 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.797417 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fcxcc" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.821477 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v4lzc" podStartSLOduration=131.821454455 podStartE2EDuration="2m11.821454455s" podCreationTimestamp="2026-01-03 03:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.820006211 +0000 UTC m=+149.669896506" watchObservedRunningTime="2026-01-03 03:17:09.821454455 +0000 UTC m=+149.671344760" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.826782 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.829027 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.329005296 +0000 UTC m=+150.178895601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.857775 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.861923 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.911243 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k57gl" podStartSLOduration=130.911230824 podStartE2EDuration="2m10.911230824s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:09.910086156 +0000 UTC m=+149.759976461" watchObservedRunningTime="2026-01-03 03:17:09.911230824 +0000 UTC m=+149.761121129" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.935764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:09 crc kubenswrapper[4746]: E0103 03:17:09.936224 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.436207691 +0000 UTC m=+150.286097996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.957394 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.972752 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:09 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:09 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:09 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:09 crc kubenswrapper[4746]: I0103 03:17:09.972805 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.026700 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sp7t5" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.039409 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.040472 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.540441365 +0000 UTC m=+150.390331670 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.079040 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" podStartSLOduration=131.079006358 podStartE2EDuration="2m11.079006358s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:10.068441465 +0000 UTC m=+149.918331770" watchObservedRunningTime="2026-01-03 03:17:10.079006358 +0000 UTC m=+149.928896663" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.120454 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bj7mx" podStartSLOduration=131.12043778 podStartE2EDuration="2m11.12043778s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:10.11877657 +0000 UTC m=+149.968666875" watchObservedRunningTime="2026-01-03 03:17:10.12043778 +0000 UTC m=+149.970328085" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.143531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.144015 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.644003733 +0000 UTC m=+150.493894038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.236689 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.244853 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.246341 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.746319292 +0000 UTC m=+150.596209597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.347322 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.347638 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.847627356 +0000 UTC m=+150.697517661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.448197 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.448499 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.948480509 +0000 UTC m=+150.798370814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.448626 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.448912 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:10.948902779 +0000 UTC m=+150.798793084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.550277 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.550611 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.050593772 +0000 UTC m=+150.900484077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.653587 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.653978 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.153964705 +0000 UTC m=+151.003855010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.754947 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.756144 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.256115929 +0000 UTC m=+151.106006234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.823257 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" event={"ID":"526d70c0-aa70-47f7-9daf-60c0e78d8dc2","Type":"ContainerStarted","Data":"eb8273c00e480c8a647dfe7285d4d13cbcd7266bc28129b1bc9bc8cd6bf18ce6"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.823316 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" event={"ID":"526d70c0-aa70-47f7-9daf-60c0e78d8dc2","Type":"ContainerStarted","Data":"98294c03b25c4aac9e895a53eddfa692eb56c37b488a94dbd486c3ce57053f5d"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.839049 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" event={"ID":"f0da58a9-1a93-439c-8f83-8eba0c4a9961","Type":"ContainerStarted","Data":"294d37a507ddbfcee9578a6574ec7650931221d4ff34ab884c7b52f575b68414"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.839114 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" event={"ID":"f0da58a9-1a93-439c-8f83-8eba0c4a9961","Type":"ContainerStarted","Data":"a0b9f007d37aaf3b30d2d45872594452295242411db59e91ac50679a201b4756"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.839128 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" event={"ID":"f0da58a9-1a93-439c-8f83-8eba0c4a9961","Type":"ContainerStarted","Data":"f5ff3f703652aa068430145c5bb8cdad5fb15e29688b91f31ed2c0100c48e5e7"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.860943 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.861553 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.361540042 +0000 UTC m=+151.211430347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.863475 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kcq6f" event={"ID":"0e7ad5c3-05bf-4244-a5c9-e218138c0ceb","Type":"ContainerStarted","Data":"c758b704214a87dffaf44968ae72e3f2aedb4ae5f328234fc23bc83f301d1d03"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.913069 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"caad18506203018cbf0afc6e62dccd37afc86c1f4e042bf6883949dc39895693"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.938840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e0bf144ba95eb3b22c6487166a61215a8f395974b0f551e2b7668c2ed4743662"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.958954 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" event={"ID":"f3150af1-742e-4b09-afe0-a819ddedc864","Type":"ContainerStarted","Data":"41db0d433b40e396c4d6084d4ab6bdd88d3519c143239f8ca7b23a8e475c46b6"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.959004 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" event={"ID":"f3150af1-742e-4b09-afe0-a819ddedc864","Type":"ContainerStarted","Data":"d0c499643682c75f2d33f3d8fbf7b02f64e510e733c74c69c63ede61cd32d7a8"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.962527 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.962884 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-03 03:17:11.462838336 +0000 UTC m=+151.312728651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.963097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:10 crc kubenswrapper[4746]: E0103 03:17:10.964109 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.464087276 +0000 UTC m=+151.313977581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.964767 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:10 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:10 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:10 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.964962 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.981588 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" event={"ID":"fc99dd78-6470-4b4c-8db2-d01982e37009","Type":"ContainerStarted","Data":"c3720030370ed75356b791ef1cfd33834b5a10dc101b8f6e5f0cd2a873534dea"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.994859 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d5824aa16c234adc7d613d0c9cb589c923148479094d9fc41d9f59efdfb92bd7"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.994908 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6d4ed90d5f928a95f1bf6f217635149b8ef7e11287bd66a737694a77a4c284d7"} Jan 03 03:17:10 crc kubenswrapper[4746]: I0103 03:17:10.995512 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.015852 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" event={"ID":"f42f8303-dc13-4de1-a5ab-a96a1606c0b8","Type":"ContainerStarted","Data":"f87e4d8075145de6ba50dd8b7e8382ab706fc6b7361603fff40153a8ae1eb4f0"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.039106 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-khnmh" event={"ID":"d5a89946-a489-411e-8e5d-07e166de5088","Type":"ContainerStarted","Data":"ee8888dd113274bbfbba7137ce1d7321af624bd7c8b89bf0bec3a3f26e5b939e"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.050895 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" event={"ID":"99d5586b-555b-4f25-8e1c-c329dffc92fc","Type":"ContainerStarted","Data":"82a671ce24ffd53cf1399fc50121dcfc1a789fd40064836e6c2b173cf3949b87"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.068104 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.069590 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.56956819 +0000 UTC m=+151.419458525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.071393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" event={"ID":"fccbd1a1-dd7a-4e3b-9f2d-f6a480d71e27","Type":"ContainerStarted","Data":"f5a470a78e7fedf172377952f2f285012c49151c3ba7a1562fe657ca429383b1"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.071664 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.085597 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-k5qxt" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.086856 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" event={"ID":"c3185930-86ef-4dff-b4ec-0d60800fb76e","Type":"ContainerStarted","Data":"2234ef10086d99facd5e1e790b5b33c8cbf172df97890c5334d7f99fb4c8027c"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.086966 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" event={"ID":"c3185930-86ef-4dff-b4ec-0d60800fb76e","Type":"ContainerStarted","Data":"421b65d6a2bb7158850faf834821fabef8b8f710564e0e238c6ce5028001de0e"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.087584 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.093780 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerStarted","Data":"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.094527 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.097735 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mpsxq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.097848 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.107849 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" event={"ID":"078fac5f-8d76-4f20-9857-18b74a4ebab0","Type":"ContainerStarted","Data":"bb3a0ad63e9412831b2e83cf51f3399723719d456f99df10e580e1176acd5252"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.130109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t84km" event={"ID":"92be181f-28e2-4b83-a7de-669db662b52c","Type":"ContainerStarted","Data":"0bbac951de73a1b8bd113315df3233e8a81d3c5873a126d6043a4f7b0fdcf6a9"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.130152 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t84km" event={"ID":"92be181f-28e2-4b83-a7de-669db662b52c","Type":"ContainerStarted","Data":"df86644450be7c08c3731f052269f5ade8d8db51fbcded7b53986d663921eba9"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.130186 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t84km" event={"ID":"92be181f-28e2-4b83-a7de-669db662b52c","Type":"ContainerStarted","Data":"bdad1ae6e255955eb3f157d5abf97e8a9d69c5f6222e3ee7b69af117f9dd345b"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.130915 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-t84km" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.159123 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" event={"ID":"27d52d81-bec6-495c-b080-d3244284d228","Type":"ContainerStarted","Data":"6e1df7d42fd5a3d95048d56f6fc5421b50dfb8eab4cbb45d37080449ad4ba29c"} Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.173438 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.179763 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tggg2" Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.180265 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.680250188 +0000 UTC m=+151.530140493 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.237948 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.274141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.280747 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.780720922 +0000 UTC m=+151.630611227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.316959 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kkq7l" podStartSLOduration=132.316941579 podStartE2EDuration="2m12.316941579s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.270877047 +0000 UTC m=+151.120767352" watchObservedRunningTime="2026-01-03 03:17:11.316941579 +0000 UTC m=+151.166831884" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.324102 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" podStartSLOduration=132.324073659 podStartE2EDuration="2m12.324073659s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.316029847 +0000 UTC m=+151.165920152" watchObservedRunningTime="2026-01-03 03:17:11.324073659 +0000 UTC m=+151.173963984" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.392688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.393114 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.893101891 +0000 UTC m=+151.742992196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.437891 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" podStartSLOduration=132.437872483 podStartE2EDuration="2m12.437872483s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.391605915 +0000 UTC m=+151.241496220" watchObservedRunningTime="2026-01-03 03:17:11.437872483 +0000 UTC m=+151.287762778" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.480783 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-wwvt9" podStartSLOduration=132.480765769 podStartE2EDuration="2m12.480765769s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.480420181 +0000 UTC m=+151.330310486" watchObservedRunningTime="2026-01-03 03:17:11.480765769 +0000 UTC m=+151.330656074" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.483459 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-lnzfg" podStartSLOduration=132.483446433 podStartE2EDuration="2m12.483446433s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.436961381 +0000 UTC m=+151.286851686" watchObservedRunningTime="2026-01-03 03:17:11.483446433 +0000 UTC m=+151.333336738" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.494097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.494341 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:11.994326463 +0000 UTC m=+151.844216768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.595160 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.595519 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.095504784 +0000 UTC m=+151.945395089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.608194 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rq6g4" podStartSLOduration=132.608179978 podStartE2EDuration="2m12.608179978s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.602741818 +0000 UTC m=+151.452632133" watchObservedRunningTime="2026-01-03 03:17:11.608179978 +0000 UTC m=+151.458070283" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.633548 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" podStartSLOduration=132.633532394 podStartE2EDuration="2m12.633532394s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.631513876 +0000 UTC m=+151.481404191" watchObservedRunningTime="2026-01-03 03:17:11.633532394 +0000 UTC m=+151.483422699" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.696440 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.696826 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.196798448 +0000 UTC m=+152.046688743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.797835 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.798191 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.298180104 +0000 UTC m=+152.148070399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.861136 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-t84km" podStartSLOduration=9.86111479 podStartE2EDuration="9.86111479s" podCreationTimestamp="2026-01-03 03:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.823058249 +0000 UTC m=+151.672948554" watchObservedRunningTime="2026-01-03 03:17:11.86111479 +0000 UTC m=+151.711005095" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.893217 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5kwqt" podStartSLOduration=132.893200488 podStartE2EDuration="2m12.893200488s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:11.861986621 +0000 UTC m=+151.711876926" watchObservedRunningTime="2026-01-03 03:17:11.893200488 +0000 UTC m=+151.743090793" Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.900105 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:11 crc kubenswrapper[4746]: E0103 03:17:11.900460 4746 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.400447661 +0000 UTC m=+152.250337966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.961690 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:11 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:11 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:11 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:11 crc kubenswrapper[4746]: I0103 03:17:11.961742 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.001333 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.001604 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.501593921 +0000 UTC m=+152.351484216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.052761 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-gr5gr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.052826 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" podUID="99d5586b-555b-4f25-8e1c-c329dffc92fc" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.102338 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.102523 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.602497096 +0000 UTC m=+152.452387401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.102578 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.102924 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.602915086 +0000 UTC m=+152.452805391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.179689 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" event={"ID":"f3150af1-742e-4b09-afe0-a819ddedc864","Type":"ContainerStarted","Data":"63a82b1f308d837313c1a342eb92c518d084566eef5c90262052a8072c50df0a"} Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.181505 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mpsxq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.181585 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.203256 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.204883 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.704840145 +0000 UTC m=+152.554730450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.207788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.208330 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-03 03:17:12.708315148 +0000 UTC m=+152.558205453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.303946 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-gr5gr" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.313472 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.314429 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.814405716 +0000 UTC m=+152.664296021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.416441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.416835 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:12.916819977 +0000 UTC m=+152.766710282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.517229 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.517955 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.017931186 +0000 UTC m=+152.867821491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.518099 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.518355 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.018348366 +0000 UTC m=+152.868238661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.618779 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.618981 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.118941353 +0000 UTC m=+152.968831658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.619098 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.619489 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.119466416 +0000 UTC m=+152.969356721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.669958 4746 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.720347 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.720910 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.220882193 +0000 UTC m=+153.070772488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.794822 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.796503 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.802514 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.823212 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8twv\" (UniqueName: \"kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.823290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.823310 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.823333 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.823681 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.323664662 +0000 UTC m=+153.173554967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.824250 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.924116 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.924341 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.424313431 +0000 UTC m=+153.274203736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.924424 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.924448 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.924486 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.924650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8twv\" (UniqueName: \"kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: 
I0103 03:17:12.924944 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: E0103 03:17:12.924969 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.424957656 +0000 UTC m=+153.274847961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.925094 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.944218 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8twv\" (UniqueName: \"kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv\") pod \"certified-operators-l57js\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.953823 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:12 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:12 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:12 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.953912 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.981366 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.982614 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.984490 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 03:17:12 crc kubenswrapper[4746]: I0103 03:17:12.994867 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.025821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.026022 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.525979453 +0000 UTC m=+153.375869758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.026391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.026451 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqh7\" (UniqueName: \"kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.026495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.026543 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.026738 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.526726271 +0000 UTC m=+153.376616576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.110396 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.127838 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.128016 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.627989304 +0000 UTC m=+153.477879609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.128319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.128436 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.128512 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqh7\" (UniqueName: \"kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.128615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.129579 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.129997 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.130033 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.630012853 +0000 UTC m=+153.479903208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.152394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqh7\" (UniqueName: \"kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7\") pod \"community-operators-nssxg\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.178034 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.178930 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.188759 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" event={"ID":"f3150af1-742e-4b09-afe0-a819ddedc864","Type":"ContainerStarted","Data":"52dd03c097b7c8000e5898353a79ee56f61be3a773721ce951703242c42cf7a5"} Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.188810 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" event={"ID":"f3150af1-742e-4b09-afe0-a819ddedc864","Type":"ContainerStarted","Data":"02d821006345add0b36c5248ce630d117e8a54d9d5ce9b16ad5d29d59cba2588"} Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.192132 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.194300 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.229871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.230140 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.730095207 +0000 UTC m=+153.579985512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.236841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.236892 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.236996 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.237228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tfs\" (UniqueName: \"kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.240828 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.740804324 +0000 UTC m=+153.590694799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.304099 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.338388 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.338687 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8tfs\" (UniqueName: \"kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.339365 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.339394 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.340155 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.340250 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.840195902 +0000 UTC m=+153.690086207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.340933 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.360000 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8tfs\" (UniqueName: \"kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs\") pod \"certified-operators-zdjl4\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.377990 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xgssz" podStartSLOduration=11.377966266 podStartE2EDuration="11.377966266s" podCreationTimestamp="2026-01-03 03:17:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:13.268200399 +0000 UTC m=+153.118090704" watchObservedRunningTime="2026-01-03 03:17:13.377966266 +0000 UTC m=+153.227856571" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.382746 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.385513 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.386938 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: W0103 03:17:13.388512 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739b93d8_31f7_4ba5_861f_1e0579358067.slice/crio-8087ed371b3754b5cfcd5bad125e0e11baef54f52546474e9d148356ce335800 WatchSource:0}: Error finding container 8087ed371b3754b5cfcd5bad125e0e11baef54f52546474e9d148356ce335800: Status 404 returned error can't find the container with id 8087ed371b3754b5cfcd5bad125e0e11baef54f52546474e9d148356ce335800 Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.413440 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.440749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljzk\" (UniqueName: \"kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.441890 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.441945 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.441999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.442255 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-03 03:17:13.942242604 +0000 UTC m=+153.792132899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ndqm2" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.490570 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.543173 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.543335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljzk\" (UniqueName: \"kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.543368 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.543414 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.543842 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.546975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:17:13 crc kubenswrapper[4746]: E0103 03:17:13.547063 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-03 03:17:14.047046992 +0000 UTC m=+153.896937287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.547520 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.563313 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljzk\" (UniqueName: \"kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk\") pod \"community-operators-s9pk6\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.619889 4746 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-03T03:17:12.669985595Z","Handler":null,"Name":""} Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.623826 4746 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.623900 4746 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.645585 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.648165 4746 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
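The retries above keep failing because the kubelet checks every CSI mount/unmount against its list of registered drivers, and kubevirt.io.hostpath-provisioner only registers at 03:17:13.62 (the RegisterPlugin and csi_plugin.go entries); the MountDevice and SetUp entries for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 that follow then succeed. The kubelet also publishes that registration state in the node's CSINode object, so it can be inspected from outside the node. Below is a minimal client-go sketch of that check, not the kubelet's own code; the node name "crc" is taken from the log, while the kubeconfig path is an assumed default.

// Sketch: list the CSI drivers currently registered on a node by reading its
// CSINode object -- the same registry the kubelet consults before it lets a
// CSI MountVolume/UnmountVolume operation proceed. Node name "crc" comes from
// the log above; the kubeconfig location is an assumption for illustration.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed default path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	// The CSINode object mirrors the kubelet's view of plugins registered
	// through /var/lib/kubelet/plugins_registry on that node.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("get CSINode: %v", err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered CSI driver: %s (nodeID %s)\n", d.Name, d.NodeID)
	}
}

The same information is visible with `kubectl get csinode crc -o yaml` under spec.drivers; until the hostpath provisioner appears there, operations against its volumes are retried with the backoff seen above.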
Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.648200 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.675368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ndqm2\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.707200 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.710781 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.746368 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.751554 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.885335 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:17:13 crc kubenswrapper[4746]: W0103 03:17:13.924109 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec6b70ea_cb0f_4bff_a489_69a988a0db5f.slice/crio-aaabe690942bf30d7f9923fd5cdd643337248b95dfbca218730d2de37e2a4790 WatchSource:0}: Error finding container aaabe690942bf30d7f9923fd5cdd643337248b95dfbca218730d2de37e2a4790: Status 404 returned error can't find the container with id aaabe690942bf30d7f9923fd5cdd643337248b95dfbca218730d2de37e2a4790 Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.962926 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:13 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:13 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:13 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.962996 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:13 crc kubenswrapper[4746]: I0103 03:17:13.975591 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.159252 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.198439 4746 generic.go:334] "Generic (PLEG): container finished" podID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerID="f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd" exitCode=0 Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.198789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerDied","Data":"f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.198900 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerStarted","Data":"83306bf9e1ffb9ddabd7a8fb6654b4ba61816fd1ac619efd9212bd39253ef971"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.201183 4746 generic.go:334] "Generic (PLEG): container finished" podID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerID="4e98046c8da9dc92a0ec2cd658df3dce64ec2cbaf6af6b39f04deb647411c67b" exitCode=0 Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.201275 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerDied","Data":"4e98046c8da9dc92a0ec2cd658df3dce64ec2cbaf6af6b39f04deb647411c67b"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.201306 4746 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerStarted","Data":"aaabe690942bf30d7f9923fd5cdd643337248b95dfbca218730d2de37e2a4790"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.201428 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.208591 4746 generic.go:334] "Generic (PLEG): container finished" podID="15a9ed2e-64d7-4917-a5fc-857b75246dd7" containerID="b471414cead322d354545445fcab7b5e7a6db0c3c241eebfb972c9a1b9ea1a21" exitCode=0 Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.208812 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" event={"ID":"15a9ed2e-64d7-4917-a5fc-857b75246dd7","Type":"ContainerDied","Data":"b471414cead322d354545445fcab7b5e7a6db0c3c241eebfb972c9a1b9ea1a21"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.211845 4746 generic.go:334] "Generic (PLEG): container finished" podID="739b93d8-31f7-4ba5-861f-1e0579358067" containerID="10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2" exitCode=0 Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.212011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerDied","Data":"10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.212294 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerStarted","Data":"8087ed371b3754b5cfcd5bad125e0e11baef54f52546474e9d148356ce335800"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.214612 4746 generic.go:334] "Generic (PLEG): container finished" podID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerID="2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94" exitCode=0 Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.214890 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerDied","Data":"2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.215030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerStarted","Data":"f81348a25436645c262077f487275c605ab943b4e972511983817da40743298a"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.217437 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" event={"ID":"d3da68b1-7a82-4adc-81ae-d9edc00d3c32","Type":"ContainerStarted","Data":"289c38a178189ee1bdeaa9655e51cd66eb43947931e659074965dd391273cb6f"} Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.472771 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.768391 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] 
Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.769483 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.772859 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.781455 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.783523 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.786488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.789751 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.792009 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.811230 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.864468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.864607 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.864802 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.864872 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt6v\" (UniqueName: \"kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.865066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.872003 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.872056 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.881780 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.888649 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-j2jgm" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.955296 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:14 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:14 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:14 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.955368 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.966867 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.966977 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.967006 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.967033 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.967057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt6v\" (UniqueName: \"kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v\") pod \"redhat-marketplace-vhm58\" (UID: 
\"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.968824 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.969846 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.969872 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.987359 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:14 crc kubenswrapper[4746]: I0103 03:17:14.988247 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbt6v\" (UniqueName: \"kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v\") pod \"redhat-marketplace-vhm58\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.089601 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.104325 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.180104 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.181227 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.191428 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.195901 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.196500 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.197529 4746 patch_prober.go:28] interesting pod/console-f9d7485db-fws24 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.197590 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fws24" podUID="e0f102bc-480f-4c8f-b3e3-7afa141e912c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.273071 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.273256 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4f7\" (UniqueName: \"kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.273327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.292127 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" event={"ID":"d3da68b1-7a82-4adc-81ae-d9edc00d3c32","Type":"ContainerStarted","Data":"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a"} Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.292902 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.307084 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j9fzb" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.343746 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" podStartSLOduration=136.343721212 podStartE2EDuration="2m16.343721212s" 
podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:15.315846555 +0000 UTC m=+155.165736880" watchObservedRunningTime="2026-01-03 03:17:15.343721212 +0000 UTC m=+155.193611517" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.381048 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.381136 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg4f7\" (UniqueName: \"kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.381188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.393113 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.393245 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.422620 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4f7\" (UniqueName: \"kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7\") pod \"redhat-marketplace-8xgdf\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.450809 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.541831 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.696854 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.809909 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.917684 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8zlc\" (UniqueName: \"kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc\") pod \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.918205 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume\") pod \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.918246 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume\") pod \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\" (UID: \"15a9ed2e-64d7-4917-a5fc-857b75246dd7\") " Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.919641 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume" (OuterVolumeSpecName: "config-volume") pod "15a9ed2e-64d7-4917-a5fc-857b75246dd7" (UID: "15a9ed2e-64d7-4917-a5fc-857b75246dd7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.928062 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15a9ed2e-64d7-4917-a5fc-857b75246dd7" (UID: "15a9ed2e-64d7-4917-a5fc-857b75246dd7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.928225 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc" (OuterVolumeSpecName: "kube-api-access-c8zlc") pod "15a9ed2e-64d7-4917-a5fc-857b75246dd7" (UID: "15a9ed2e-64d7-4917-a5fc-857b75246dd7"). InnerVolumeSpecName "kube-api-access-c8zlc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.959278 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.981830 4746 patch_prober.go:28] interesting pod/router-default-5444994796-k57gl container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 03 03:17:15 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Jan 03 03:17:15 crc kubenswrapper[4746]: [+]process-running ok Jan 03 03:17:15 crc kubenswrapper[4746]: healthz check failed Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.982016 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k57gl" podUID="caccb049-c5ee-4d59-8b3e-1c4f54b81f10" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.995458 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:17:15 crc kubenswrapper[4746]: E0103 03:17:15.996031 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a9ed2e-64d7-4917-a5fc-857b75246dd7" containerName="collect-profiles" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.996051 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a9ed2e-64d7-4917-a5fc-857b75246dd7" containerName="collect-profiles" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.996226 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a9ed2e-64d7-4917-a5fc-857b75246dd7" containerName="collect-profiles" Jan 03 03:17:15 crc kubenswrapper[4746]: I0103 03:17:15.997289 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.000503 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.011439 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020413 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020492 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020516 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rrh\" (UniqueName: \"kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020671 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8zlc\" (UniqueName: \"kubernetes.io/projected/15a9ed2e-64d7-4917-a5fc-857b75246dd7-kube-api-access-c8zlc\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020685 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15a9ed2e-64d7-4917-a5fc-857b75246dd7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.020697 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15a9ed2e-64d7-4917-a5fc-857b75246dd7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.122788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.122843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.122872 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rrh\" (UniqueName: \"kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh\") pod \"redhat-operators-87hhg\" (UID: 
\"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.123264 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.123369 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.169040 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rrh\" (UniqueName: \"kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh\") pod \"redhat-operators-87hhg\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.285686 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:17:16 crc kubenswrapper[4746]: W0103 03:17:16.308947 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59b72a40_0fdf_4e53_aa91_95c8dd8f918c.slice/crio-8f628cd2ff1e8b615aa638ff8cd9adf0b6d8790d33d131175e3206bd226b971e WatchSource:0}: Error finding container 8f628cd2ff1e8b615aa638ff8cd9adf0b6d8790d33d131175e3206bd226b971e: Status 404 returned error can't find the container with id 8f628cd2ff1e8b615aa638ff8cd9adf0b6d8790d33d131175e3206bd226b971e Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.309640 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" event={"ID":"15a9ed2e-64d7-4917-a5fc-857b75246dd7","Type":"ContainerDied","Data":"fc5240ee1a53935355575fb44729ccac439aff498933a74f6ecbcbf49fd1057e"} Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.309686 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5240ee1a53935355575fb44729ccac439aff498933a74f6ecbcbf49fd1057e" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.309747 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456835-bvq5m" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.321493 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.327535 4746 generic.go:334] "Generic (PLEG): container finished" podID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerID="32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada" exitCode=0 Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.328343 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerDied","Data":"32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada"} Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.328383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerStarted","Data":"8e0f5e4dff989411a04feed2ab61271770b96cb0e90fdb8e01597358a3399660"} Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.331769 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e","Type":"ContainerStarted","Data":"01a8a597c97b9c7c43cc085722cfd95b15225f03eb7512ac2e55cb9d6a753ee7"} Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.389489 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.390908 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.483993 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.533527 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.533759 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8xm\" (UniqueName: \"kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.533793 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.635907 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f8xm\" (UniqueName: \"kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.635969 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.636061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.636941 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.637234 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.675868 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f8xm\" (UniqueName: \"kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm\") pod \"redhat-operators-zr2zs\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.718403 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:17:16 crc kubenswrapper[4746]: W0103 03:17:16.738340 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cefe73c_d6d3_4428_af09_33abe2c70156.slice/crio-478f3795183b4e0a2711dd59abe4ad5af0dafb8cefc9f7d7a6d54b7c27759d2e WatchSource:0}: Error finding container 478f3795183b4e0a2711dd59abe4ad5af0dafb8cefc9f7d7a6d54b7c27759d2e: Status 404 returned error can't find the container with id 478f3795183b4e0a2711dd59abe4ad5af0dafb8cefc9f7d7a6d54b7c27759d2e Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.806318 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.955983 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:16 crc kubenswrapper[4746]: I0103 03:17:16.958532 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k57gl" Jan 03 03:17:17 crc kubenswrapper[4746]: I0103 03:17:17.142225 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:17:17 crc kubenswrapper[4746]: W0103 03:17:17.177588 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4be3b87_9327_4284_87d3_eef1752a461c.slice/crio-564543c3ea7d24ce1be3307df8b4fbbe92340d4e0a11eea8d150c90491fed8cb WatchSource:0}: Error finding container 564543c3ea7d24ce1be3307df8b4fbbe92340d4e0a11eea8d150c90491fed8cb: Status 404 returned error can't find the container with id 564543c3ea7d24ce1be3307df8b4fbbe92340d4e0a11eea8d150c90491fed8cb Jan 03 03:17:17 crc kubenswrapper[4746]: I0103 03:17:17.355185 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerStarted","Data":"564543c3ea7d24ce1be3307df8b4fbbe92340d4e0a11eea8d150c90491fed8cb"} Jan 03 03:17:17 crc kubenswrapper[4746]: I0103 03:17:17.368953 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerStarted","Data":"478f3795183b4e0a2711dd59abe4ad5af0dafb8cefc9f7d7a6d54b7c27759d2e"} Jan 03 03:17:17 crc kubenswrapper[4746]: I0103 03:17:17.385534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerStarted","Data":"8f628cd2ff1e8b615aa638ff8cd9adf0b6d8790d33d131175e3206bd226b971e"} Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.398778 4746 generic.go:334] "Generic (PLEG): container finished" podID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerID="7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6" exitCode=0 Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.399492 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerDied","Data":"7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6"} Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.407982 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4be3b87-9327-4284-87d3-eef1752a461c" containerID="d996d2320ff9000aadbb8ba142f61b2db994bfe323b35f6caa65c935d67310d0" exitCode=0 Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.408049 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerDied","Data":"d996d2320ff9000aadbb8ba142f61b2db994bfe323b35f6caa65c935d67310d0"} Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.411822 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e","Type":"ContainerStarted","Data":"0ba4aa71dba829f32f30fdd15d96ded91f0cb1d6f0b79dc061bd40237f8867bd"} Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.417680 4746 generic.go:334] "Generic (PLEG): container finished" podID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerID="5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993" exitCode=0 Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.417716 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerDied","Data":"5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993"} Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.495553 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.495520368 podStartE2EDuration="4.495520368s" podCreationTimestamp="2026-01-03 03:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:18.469610838 +0000 UTC m=+158.319501163" watchObservedRunningTime="2026-01-03 03:17:18.495520368 +0000 UTC m=+158.345410673" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.637814 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.638488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.640486 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.642008 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.645686 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.775804 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.776032 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.877857 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.877963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.877988 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:18 crc kubenswrapper[4746]: I0103 03:17:18.896986 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:19 crc kubenswrapper[4746]: I0103 03:17:19.038186 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:19 crc kubenswrapper[4746]: I0103 03:17:19.302254 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 03 03:17:19 crc kubenswrapper[4746]: W0103 03:17:19.345248 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14289e50_8aac_456a_a487_2530a47f90de.slice/crio-0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8 WatchSource:0}: Error finding container 0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8: Status 404 returned error can't find the container with id 0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8 Jan 03 03:17:19 crc kubenswrapper[4746]: I0103 03:17:19.430935 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14289e50-8aac-456a-a487-2530a47f90de","Type":"ContainerStarted","Data":"0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8"} Jan 03 03:17:19 crc kubenswrapper[4746]: I0103 03:17:19.435205 4746 generic.go:334] "Generic (PLEG): container finished" podID="b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" containerID="0ba4aa71dba829f32f30fdd15d96ded91f0cb1d6f0b79dc061bd40237f8867bd" exitCode=0 Jan 03 03:17:19 crc kubenswrapper[4746]: I0103 03:17:19.435242 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e","Type":"ContainerDied","Data":"0ba4aa71dba829f32f30fdd15d96ded91f0cb1d6f0b79dc061bd40237f8867bd"} Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.450395 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14289e50-8aac-456a-a487-2530a47f90de","Type":"ContainerStarted","Data":"146f21613f9ba8d1e43c44cc01fa715ff1b4b1faed1953d0efdc74c20fc2610c"} Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.481583 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.481561631 podStartE2EDuration="2.481561631s" podCreationTimestamp="2026-01-03 03:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 
03:17:20.474087432 +0000 UTC m=+160.323977757" watchObservedRunningTime="2026-01-03 03:17:20.481561631 +0000 UTC m=+160.331451936" Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.750921 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.790233 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.918706 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir\") pod \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.918765 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access\") pod \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\" (UID: \"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e\") " Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.919848 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" (UID: "b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:17:20 crc kubenswrapper[4746]: I0103 03:17:20.925812 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" (UID: "b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.021751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.021869 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.021887 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.033591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28a574f3-8744-4d57-aada-e4b328244e19-metrics-certs\") pod \"network-metrics-daemon-57tv2\" (UID: \"28a574f3-8744-4d57-aada-e4b328244e19\") " pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.072351 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-t84km" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.130249 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-57tv2" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.379743 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-57tv2"] Jan 03 03:17:21 crc kubenswrapper[4746]: W0103 03:17:21.389696 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a574f3_8744_4d57_aada_e4b328244e19.slice/crio-c1a61df8c5c15ee491ab09c45575aa39dab85927b75b1232d16105154c31ba8a WatchSource:0}: Error finding container c1a61df8c5c15ee491ab09c45575aa39dab85927b75b1232d16105154c31ba8a: Status 404 returned error can't find the container with id c1a61df8c5c15ee491ab09c45575aa39dab85927b75b1232d16105154c31ba8a Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.459247 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57tv2" event={"ID":"28a574f3-8744-4d57-aada-e4b328244e19","Type":"ContainerStarted","Data":"c1a61df8c5c15ee491ab09c45575aa39dab85927b75b1232d16105154c31ba8a"} Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.462940 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e","Type":"ContainerDied","Data":"01a8a597c97b9c7c43cc085722cfd95b15225f03eb7512ac2e55cb9d6a753ee7"} Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.462988 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a8a597c97b9c7c43cc085722cfd95b15225f03eb7512ac2e55cb9d6a753ee7" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.463075 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.465260 4746 generic.go:334] "Generic (PLEG): container finished" podID="14289e50-8aac-456a-a487-2530a47f90de" containerID="146f21613f9ba8d1e43c44cc01fa715ff1b4b1faed1953d0efdc74c20fc2610c" exitCode=0 Jan 03 03:17:21 crc kubenswrapper[4746]: I0103 03:17:21.465303 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14289e50-8aac-456a-a487-2530a47f90de","Type":"ContainerDied","Data":"146f21613f9ba8d1e43c44cc01fa715ff1b4b1faed1953d0efdc74c20fc2610c"} Jan 03 03:17:22 crc kubenswrapper[4746]: I0103 03:17:22.475285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57tv2" event={"ID":"28a574f3-8744-4d57-aada-e4b328244e19","Type":"ContainerStarted","Data":"f7b57ac7fd01279d303c05e70b00636d8eca4da90c70af814a7fac0fc034ee13"} Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.692170 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.789076 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir\") pod \"14289e50-8aac-456a-a487-2530a47f90de\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.789150 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access\") pod \"14289e50-8aac-456a-a487-2530a47f90de\" (UID: \"14289e50-8aac-456a-a487-2530a47f90de\") " Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.789198 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14289e50-8aac-456a-a487-2530a47f90de" (UID: "14289e50-8aac-456a-a487-2530a47f90de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.789809 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14289e50-8aac-456a-a487-2530a47f90de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.795299 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14289e50-8aac-456a-a487-2530a47f90de" (UID: "14289e50-8aac-456a-a487-2530a47f90de"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:17:24 crc kubenswrapper[4746]: I0103 03:17:24.891362 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14289e50-8aac-456a-a487-2530a47f90de-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:25 crc kubenswrapper[4746]: I0103 03:17:25.198111 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:25 crc kubenswrapper[4746]: I0103 03:17:25.205325 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fws24" Jan 03 03:17:25 crc kubenswrapper[4746]: I0103 03:17:25.489759 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 03 03:17:25 crc kubenswrapper[4746]: I0103 03:17:25.491712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"14289e50-8aac-456a-a487-2530a47f90de","Type":"ContainerDied","Data":"0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8"} Jan 03 03:17:25 crc kubenswrapper[4746]: I0103 03:17:25.491775 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eb807e14724637e3ea89614c8dc51b42ff5471bb3b6e837bf5d163cd919c8d8" Jan 03 03:17:31 crc kubenswrapper[4746]: I0103 03:17:31.373994 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:17:31 crc kubenswrapper[4746]: I0103 03:17:31.374491 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:17:33 crc kubenswrapper[4746]: I0103 03:17:33.982298 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:17:42 crc kubenswrapper[4746]: E0103 03:17:42.952233 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 03 03:17:42 crc kubenswrapper[4746]: E0103 03:17:42.953166 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fljzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s9pk6_openshift-marketplace(ec6b70ea-cb0f-4bff-a489-69a988a0db5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 03:17:42 crc kubenswrapper[4746]: E0103 03:17:42.954346 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s9pk6" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" Jan 03 03:17:45 crc kubenswrapper[4746]: I0103 03:17:45.995868 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4rdkk" Jan 03 03:17:46 crc kubenswrapper[4746]: I0103 03:17:46.857004 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 03 03:17:49 crc kubenswrapper[4746]: E0103 03:17:49.898402 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 03:17:49 crc kubenswrapper[4746]: E0103 03:17:49.899433 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8rrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-87hhg_openshift-marketplace(6cefe73c-d6d3-4428-af09-33abe2c70156): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 03:17:49 crc kubenswrapper[4746]: E0103 03:17:49.900781 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-87hhg" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.020098 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-87hhg" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.044043 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.044379 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f8xm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zr2zs_openshift-marketplace(b4be3b87-9327-4284-87d3-eef1752a461c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.045668 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zr2zs" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.052850 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s9pk6" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.639784 4746 generic.go:334] "Generic (PLEG): container finished" podID="739b93d8-31f7-4ba5-861f-1e0579358067" containerID="3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf" exitCode=0 Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.639904 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerDied","Data":"3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf"} Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.643546 4746 generic.go:334] "Generic (PLEG): container finished" podID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerID="5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd" exitCode=0 Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.643606 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerDied","Data":"5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd"} Jan 03 03:17:52 crc kubenswrapper[4746]: 
I0103 03:17:52.654644 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-57tv2" event={"ID":"28a574f3-8744-4d57-aada-e4b328244e19","Type":"ContainerStarted","Data":"3f6a79cfea7c5a7fd4753fd7cd86606af31a7ed7a2c3534271c81fb2cecd42dd"} Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.658448 4746 generic.go:334] "Generic (PLEG): container finished" podID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerID="9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a" exitCode=0 Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.658500 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerDied","Data":"9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a"} Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.660882 4746 generic.go:334] "Generic (PLEG): container finished" podID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerID="fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c" exitCode=0 Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.660922 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerDied","Data":"fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c"} Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.666354 4746 generic.go:334] "Generic (PLEG): container finished" podID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerID="fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c" exitCode=0 Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.667485 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerDied","Data":"fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c"} Jan 03 03:17:52 crc kubenswrapper[4746]: E0103 03:17:52.668259 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zr2zs" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" Jan 03 03:17:52 crc kubenswrapper[4746]: I0103 03:17:52.677082 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-57tv2" podStartSLOduration=173.677062475 podStartE2EDuration="2m53.677062475s" podCreationTimestamp="2026-01-03 03:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:52.672056535 +0000 UTC m=+192.521946840" watchObservedRunningTime="2026-01-03 03:17:52.677062475 +0000 UTC m=+192.526952780" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.442150 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 03:17:53 crc kubenswrapper[4746]: E0103 03:17:53.442566 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14289e50-8aac-456a-a487-2530a47f90de" containerName="pruner" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.442578 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="14289e50-8aac-456a-a487-2530a47f90de" containerName="pruner" Jan 03 03:17:53 crc 
kubenswrapper[4746]: E0103 03:17:53.442586 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" containerName="pruner" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.442591 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" containerName="pruner" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.442751 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81b3d31-f98e-4a61-88b1-9fe4a69f1a0e" containerName="pruner" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.442769 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="14289e50-8aac-456a-a487-2530a47f90de" containerName="pruner" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.443111 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.448399 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.448559 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.453551 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.518723 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.518786 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.620057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.620115 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.620170 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.642178 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.672228 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerStarted","Data":"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca"} Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.674835 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerStarted","Data":"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938"} Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.678842 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerStarted","Data":"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0"} Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.688888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerStarted","Data":"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee"} Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.691401 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zdjl4" podStartSLOduration=1.800590895 podStartE2EDuration="40.691386155s" podCreationTimestamp="2026-01-03 03:17:13 +0000 UTC" firstStartedPulling="2026-01-03 03:17:14.216737365 +0000 UTC m=+154.066627670" lastFinishedPulling="2026-01-03 03:17:53.107532625 +0000 UTC m=+192.957422930" observedRunningTime="2026-01-03 03:17:53.688787133 +0000 UTC m=+193.538677438" watchObservedRunningTime="2026-01-03 03:17:53.691386155 +0000 UTC m=+193.541276460" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.707402 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerStarted","Data":"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b"} Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.713879 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nssxg" podStartSLOduration=2.772137796 podStartE2EDuration="41.713859913s" podCreationTimestamp="2026-01-03 03:17:12 +0000 UTC" firstStartedPulling="2026-01-03 03:17:14.200878727 +0000 UTC m=+154.050769032" lastFinishedPulling="2026-01-03 03:17:53.142600844 +0000 UTC m=+192.992491149" observedRunningTime="2026-01-03 03:17:53.711546487 +0000 UTC m=+193.561436792" watchObservedRunningTime="2026-01-03 03:17:53.713859913 +0000 UTC m=+193.563750218" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.732745 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l57js" podStartSLOduration=2.717984628 podStartE2EDuration="41.732724624s" podCreationTimestamp="2026-01-03 03:17:12 +0000 UTC" firstStartedPulling="2026-01-03 03:17:14.2148623 +0000 UTC 
m=+154.064752605" lastFinishedPulling="2026-01-03 03:17:53.229602296 +0000 UTC m=+193.079492601" observedRunningTime="2026-01-03 03:17:53.72794561 +0000 UTC m=+193.577835925" watchObservedRunningTime="2026-01-03 03:17:53.732724624 +0000 UTC m=+193.582614929" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.747870 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vhm58" podStartSLOduration=2.76585261 podStartE2EDuration="39.747852216s" podCreationTimestamp="2026-01-03 03:17:14 +0000 UTC" firstStartedPulling="2026-01-03 03:17:16.34747488 +0000 UTC m=+156.197365185" lastFinishedPulling="2026-01-03 03:17:53.329474476 +0000 UTC m=+193.179364791" observedRunningTime="2026-01-03 03:17:53.745469329 +0000 UTC m=+193.595359634" watchObservedRunningTime="2026-01-03 03:17:53.747852216 +0000 UTC m=+193.597742521" Jan 03 03:17:53 crc kubenswrapper[4746]: I0103 03:17:53.778465 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:54 crc kubenswrapper[4746]: I0103 03:17:54.249840 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xgdf" podStartSLOduration=4.621203652 podStartE2EDuration="39.249821397s" podCreationTimestamp="2026-01-03 03:17:15 +0000 UTC" firstStartedPulling="2026-01-03 03:17:18.412325917 +0000 UTC m=+158.262216222" lastFinishedPulling="2026-01-03 03:17:53.040943662 +0000 UTC m=+192.890833967" observedRunningTime="2026-01-03 03:17:53.767145988 +0000 UTC m=+193.617036313" watchObservedRunningTime="2026-01-03 03:17:54.249821397 +0000 UTC m=+194.099711702" Jan 03 03:17:54 crc kubenswrapper[4746]: I0103 03:17:54.251020 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 03 03:17:54 crc kubenswrapper[4746]: W0103 03:17:54.258929 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3be6ac1d_8e15_4d6d_8419_6036a66324b3.slice/crio-5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505 WatchSource:0}: Error finding container 5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505: Status 404 returned error can't find the container with id 5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505 Jan 03 03:17:54 crc kubenswrapper[4746]: I0103 03:17:54.717556 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3be6ac1d-8e15-4d6d-8419-6036a66324b3","Type":"ContainerStarted","Data":"e7c8bbb8c228bb5b33a5fcb6c12f420d3f8ef091e8960b4f6c3d192e7622ae70"} Jan 03 03:17:54 crc kubenswrapper[4746]: I0103 03:17:54.717590 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3be6ac1d-8e15-4d6d-8419-6036a66324b3","Type":"ContainerStarted","Data":"5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505"} Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.105254 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.105508 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.542896 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.543180 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.591763 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.612076 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.6120615579999997 podStartE2EDuration="2.612061558s" podCreationTimestamp="2026-01-03 03:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:17:54.729560666 +0000 UTC m=+194.579450971" watchObservedRunningTime="2026-01-03 03:17:55.612061558 +0000 UTC m=+195.461951863" Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.724286 4746 generic.go:334] "Generic (PLEG): container finished" podID="3be6ac1d-8e15-4d6d-8419-6036a66324b3" containerID="e7c8bbb8c228bb5b33a5fcb6c12f420d3f8ef091e8960b4f6c3d192e7622ae70" exitCode=0 Jan 03 03:17:55 crc kubenswrapper[4746]: I0103 03:17:55.725607 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3be6ac1d-8e15-4d6d-8419-6036a66324b3","Type":"ContainerDied","Data":"e7c8bbb8c228bb5b33a5fcb6c12f420d3f8ef091e8960b4f6c3d192e7622ae70"} Jan 03 03:17:56 crc kubenswrapper[4746]: I0103 03:17:56.188781 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vhm58" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="registry-server" probeResult="failure" output=< Jan 03 03:17:56 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Jan 03 03:17:56 crc kubenswrapper[4746]: > Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.112709 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.174948 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access\") pod \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.175108 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir\") pod \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\" (UID: \"3be6ac1d-8e15-4d6d-8419-6036a66324b3\") " Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.175166 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3be6ac1d-8e15-4d6d-8419-6036a66324b3" (UID: "3be6ac1d-8e15-4d6d-8419-6036a66324b3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.175415 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.181689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3be6ac1d-8e15-4d6d-8419-6036a66324b3" (UID: "3be6ac1d-8e15-4d6d-8419-6036a66324b3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.277623 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3be6ac1d-8e15-4d6d-8419-6036a66324b3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.738105 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3be6ac1d-8e15-4d6d-8419-6036a66324b3","Type":"ContainerDied","Data":"5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505"} Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.738164 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3b8277fd4f3011400f53e8f76833ff6f823acd42b20de04edee0bc73a87505" Jan 03 03:17:57 crc kubenswrapper[4746]: I0103 03:17:57.738215 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.833506 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 03:18:00 crc kubenswrapper[4746]: E0103 03:18:00.834875 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be6ac1d-8e15-4d6d-8419-6036a66324b3" containerName="pruner" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.834895 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be6ac1d-8e15-4d6d-8419-6036a66324b3" containerName="pruner" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.835060 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be6ac1d-8e15-4d6d-8419-6036a66324b3" containerName="pruner" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.835792 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.836937 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.837019 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.837082 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.840385 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.840470 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.857856 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.938534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.938601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.938650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.938745 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.938758 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock\") pod \"installer-9-crc\" (UID: 
\"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:00 crc kubenswrapper[4746]: I0103 03:18:00.960810 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access\") pod \"installer-9-crc\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.163574 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.374547 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.374712 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.374945 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.375879 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.375989 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17" gracePeriod=600 Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.619144 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 03 03:18:01 crc kubenswrapper[4746]: W0103 03:18:01.621319 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod25948708_1b20_4dd8_8073_ec76836c38d3.slice/crio-43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a WatchSource:0}: Error finding container 43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a: Status 404 returned error can't find the container with id 43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.765269 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25948708-1b20-4dd8-8073-ec76836c38d3","Type":"ContainerStarted","Data":"43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a"} Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.767024 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="00b3b853-9953-4039-964d-841a01708848" containerID="87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17" exitCode=0 Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.767053 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17"} Jan 03 03:18:01 crc kubenswrapper[4746]: I0103 03:18:01.767068 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d"} Jan 03 03:18:02 crc kubenswrapper[4746]: I0103 03:18:02.773390 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25948708-1b20-4dd8-8073-ec76836c38d3","Type":"ContainerStarted","Data":"7c6163ad56cbe3fd7d154ca1bd36ab791f3ef18a361b4e513ac0d6b393fb5b94"} Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.111008 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.111344 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.154117 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.170917 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.170892831 podStartE2EDuration="3.170892831s" podCreationTimestamp="2026-01-03 03:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:18:02.789565478 +0000 UTC m=+202.639455783" watchObservedRunningTime="2026-01-03 03:18:03.170892831 +0000 UTC m=+203.020783156" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.305566 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.305643 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.352946 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.491290 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.491598 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.564242 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.816386 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.822377 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:18:03 crc kubenswrapper[4746]: I0103 03:18:03.824158 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:18:05 crc kubenswrapper[4746]: I0103 03:18:05.154876 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:18:05 crc kubenswrapper[4746]: I0103 03:18:05.199350 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:18:05 crc kubenswrapper[4746]: I0103 03:18:05.581833 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:18:06 crc kubenswrapper[4746]: I0103 03:18:06.788076 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:18:06 crc kubenswrapper[4746]: I0103 03:18:06.798454 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zdjl4" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="registry-server" containerID="cri-o://051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca" gracePeriod=2 Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.175080 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.222269 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content\") pod \"10fd97dc-b59e-4136-9b1e-2084eee07a32\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.222706 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8tfs\" (UniqueName: \"kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs\") pod \"10fd97dc-b59e-4136-9b1e-2084eee07a32\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.222744 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities\") pod \"10fd97dc-b59e-4136-9b1e-2084eee07a32\" (UID: \"10fd97dc-b59e-4136-9b1e-2084eee07a32\") " Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.223572 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities" (OuterVolumeSpecName: "utilities") pod "10fd97dc-b59e-4136-9b1e-2084eee07a32" (UID: "10fd97dc-b59e-4136-9b1e-2084eee07a32"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.227938 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs" (OuterVolumeSpecName: "kube-api-access-j8tfs") pod "10fd97dc-b59e-4136-9b1e-2084eee07a32" (UID: "10fd97dc-b59e-4136-9b1e-2084eee07a32"). InnerVolumeSpecName "kube-api-access-j8tfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.275181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10fd97dc-b59e-4136-9b1e-2084eee07a32" (UID: "10fd97dc-b59e-4136-9b1e-2084eee07a32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.324393 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8tfs\" (UniqueName: \"kubernetes.io/projected/10fd97dc-b59e-4136-9b1e-2084eee07a32-kube-api-access-j8tfs\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.324421 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.324430 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fd97dc-b59e-4136-9b1e-2084eee07a32-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.804363 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerStarted","Data":"16a659feb32757a1de8dc601fed9baa72ca76df467765adc71219dac02050c50"} Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.807471 4746 generic.go:334] "Generic (PLEG): container finished" podID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerID="051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca" exitCode=0 Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.807513 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerDied","Data":"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca"} Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.807531 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zdjl4" event={"ID":"10fd97dc-b59e-4136-9b1e-2084eee07a32","Type":"ContainerDied","Data":"f81348a25436645c262077f487275c605ab943b4e972511983817da40743298a"} Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.807548 4746 scope.go:117] "RemoveContainer" containerID="051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.807636 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zdjl4" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.809857 4746 generic.go:334] "Generic (PLEG): container finished" podID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerID="ca70f27aa7f3cb407f045a089e7c8b794f64d80703fbf68c6a4218c945a3de0a" exitCode=0 Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.809991 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerDied","Data":"ca70f27aa7f3cb407f045a089e7c8b794f64d80703fbf68c6a4218c945a3de0a"} Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.812375 4746 generic.go:334] "Generic (PLEG): container finished" podID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerID="49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9" exitCode=0 Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.812415 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerDied","Data":"49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9"} Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.831080 4746 scope.go:117] "RemoveContainer" containerID="fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.844249 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.849084 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zdjl4"] Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.870134 4746 scope.go:117] "RemoveContainer" containerID="2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.897582 4746 scope.go:117] "RemoveContainer" containerID="051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca" Jan 03 03:18:07 crc kubenswrapper[4746]: E0103 03:18:07.898167 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca\": container with ID starting with 051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca not found: ID does not exist" containerID="051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.898209 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca"} err="failed to get container status \"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca\": rpc error: code = NotFound desc = could not find container \"051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca\": container with ID starting with 051f07e10fb1cb202b9fcd752552fb31dc76e7865b6b79d9165dbc8454206aca not found: ID does not exist" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.898235 4746 scope.go:117] "RemoveContainer" containerID="fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c" Jan 03 03:18:07 crc kubenswrapper[4746]: E0103 03:18:07.898576 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c\": container with ID starting with fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c not found: ID does not exist" containerID="fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.898608 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c"} err="failed to get container status \"fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c\": rpc error: code = NotFound desc = could not find container \"fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c\": container with ID starting with fbb3e70629332741df64304541dad51475509663c6774cfd340ac016d529e65c not found: ID does not exist" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.898647 4746 scope.go:117] "RemoveContainer" containerID="2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94" Jan 03 03:18:07 crc kubenswrapper[4746]: E0103 03:18:07.898967 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94\": container with ID starting with 2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94 not found: ID does not exist" containerID="2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94" Jan 03 03:18:07 crc kubenswrapper[4746]: I0103 03:18:07.898993 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94"} err="failed to get container status \"2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94\": rpc error: code = NotFound desc = could not find container \"2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94\": container with ID starting with 2ff5663e35199075bbfbeb93811ab79ba84b57b05149bea445f99bbefe886d94 not found: ID does not exist" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.187051 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.187666 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xgdf" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="registry-server" containerID="cri-o://ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee" gracePeriod=2 Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.474032 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" path="/var/lib/kubelet/pods/10fd97dc-b59e-4136-9b1e-2084eee07a32/volumes" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.523837 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.540578 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg4f7\" (UniqueName: \"kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7\") pod \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.540648 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content\") pod \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.540718 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities\") pod \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\" (UID: \"59b72a40-0fdf-4e53-aa91-95c8dd8f918c\") " Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.542848 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities" (OuterVolumeSpecName: "utilities") pod "59b72a40-0fdf-4e53-aa91-95c8dd8f918c" (UID: "59b72a40-0fdf-4e53-aa91-95c8dd8f918c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.547169 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7" (OuterVolumeSpecName: "kube-api-access-dg4f7") pod "59b72a40-0fdf-4e53-aa91-95c8dd8f918c" (UID: "59b72a40-0fdf-4e53-aa91-95c8dd8f918c"). InnerVolumeSpecName "kube-api-access-dg4f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.564170 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59b72a40-0fdf-4e53-aa91-95c8dd8f918c" (UID: "59b72a40-0fdf-4e53-aa91-95c8dd8f918c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.641613 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.641641 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.641665 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg4f7\" (UniqueName: \"kubernetes.io/projected/59b72a40-0fdf-4e53-aa91-95c8dd8f918c-kube-api-access-dg4f7\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.819212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerStarted","Data":"6a7cf1024ca84ad5f00225038aef1e06f6e283b32b73360b4fe975b5f279b898"} Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.820948 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerStarted","Data":"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72"} Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.824111 4746 generic.go:334] "Generic (PLEG): container finished" podID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerID="ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee" exitCode=0 Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.824161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerDied","Data":"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee"} Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.824179 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xgdf" event={"ID":"59b72a40-0fdf-4e53-aa91-95c8dd8f918c","Type":"ContainerDied","Data":"8f628cd2ff1e8b615aa638ff8cd9adf0b6d8790d33d131175e3206bd226b971e"} Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.824196 4746 scope.go:117] "RemoveContainer" containerID="ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.824255 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xgdf" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.828741 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4be3b87-9327-4284-87d3-eef1752a461c" containerID="16a659feb32757a1de8dc601fed9baa72ca76df467765adc71219dac02050c50" exitCode=0 Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.828801 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerDied","Data":"16a659feb32757a1de8dc601fed9baa72ca76df467765adc71219dac02050c50"} Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.850828 4746 scope.go:117] "RemoveContainer" containerID="5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.853517 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s9pk6" podStartSLOduration=1.6492572939999999 podStartE2EDuration="55.853498187s" podCreationTimestamp="2026-01-03 03:17:13 +0000 UTC" firstStartedPulling="2026-01-03 03:17:14.213837756 +0000 UTC m=+154.063728061" lastFinishedPulling="2026-01-03 03:18:08.418078629 +0000 UTC m=+208.267968954" observedRunningTime="2026-01-03 03:18:08.83831211 +0000 UTC m=+208.688202425" watchObservedRunningTime="2026-01-03 03:18:08.853498187 +0000 UTC m=+208.703388502" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.864738 4746 scope.go:117] "RemoveContainer" containerID="7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.885308 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.888077 4746 scope.go:117] "RemoveContainer" containerID="ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.888534 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xgdf"] Jan 03 03:18:08 crc kubenswrapper[4746]: E0103 03:18:08.888916 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee\": container with ID starting with ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee not found: ID does not exist" containerID="ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.888953 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee"} err="failed to get container status \"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee\": rpc error: code = NotFound desc = could not find container \"ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee\": container with ID starting with ea32208c27323c3f69b58fc3cc7637814879122cae057115303a1ce1a06884ee not found: ID does not exist" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.888972 4746 scope.go:117] "RemoveContainer" containerID="5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd" Jan 03 03:18:08 crc kubenswrapper[4746]: E0103 03:18:08.889866 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd\": container with ID starting with 5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd not found: ID does not exist" containerID="5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.889891 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd"} err="failed to get container status \"5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd\": rpc error: code = NotFound desc = could not find container \"5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd\": container with ID starting with 5f5e488ac2e6d739865084e2e5281af253c91f63f882231b0976a7b8776b55dd not found: ID does not exist" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.889907 4746 scope.go:117] "RemoveContainer" containerID="7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6" Jan 03 03:18:08 crc kubenswrapper[4746]: E0103 03:18:08.890149 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6\": container with ID starting with 7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6 not found: ID does not exist" containerID="7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.890168 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6"} err="failed to get container status \"7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6\": rpc error: code = NotFound desc = could not find container \"7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6\": container with ID starting with 7ed1011da91a64c18c80c9a88c79fed7b158c4a53d9311b470881b52b59675c6 not found: ID does not exist" Jan 03 03:18:08 crc kubenswrapper[4746]: I0103 03:18:08.901032 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87hhg" podStartSLOduration=3.920948439 podStartE2EDuration="53.901010139s" podCreationTimestamp="2026-01-03 03:17:15 +0000 UTC" firstStartedPulling="2026-01-03 03:17:18.419799516 +0000 UTC m=+158.269689821" lastFinishedPulling="2026-01-03 03:18:08.399861226 +0000 UTC m=+208.249751521" observedRunningTime="2026-01-03 03:18:08.895983894 +0000 UTC m=+208.745874199" watchObservedRunningTime="2026-01-03 03:18:08.901010139 +0000 UTC m=+208.750900444" Jan 03 03:18:09 crc kubenswrapper[4746]: I0103 03:18:09.838815 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerStarted","Data":"8a65e97250f281039618ffb981e5ffd169d6e74f947ec8e9ab5368fb31d917ba"} Jan 03 03:18:09 crc kubenswrapper[4746]: I0103 03:18:09.862867 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zr2zs" podStartSLOduration=2.836893529 podStartE2EDuration="53.862846369s" podCreationTimestamp="2026-01-03 03:17:16 +0000 UTC" firstStartedPulling="2026-01-03 03:17:18.409203033 +0000 UTC m=+158.259093338" lastFinishedPulling="2026-01-03 03:18:09.435155873 +0000 UTC 
m=+209.285046178" observedRunningTime="2026-01-03 03:18:09.858862439 +0000 UTC m=+209.708752754" watchObservedRunningTime="2026-01-03 03:18:09.862846369 +0000 UTC m=+209.712736684" Jan 03 03:18:10 crc kubenswrapper[4746]: I0103 03:18:10.473251 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" path="/var/lib/kubelet/pods/59b72a40-0fdf-4e53-aa91-95c8dd8f918c/volumes" Jan 03 03:18:13 crc kubenswrapper[4746]: I0103 03:18:13.711478 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:13 crc kubenswrapper[4746]: I0103 03:18:13.711794 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:13 crc kubenswrapper[4746]: I0103 03:18:13.751961 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:13 crc kubenswrapper[4746]: I0103 03:18:13.895210 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.203939 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.204597 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s9pk6" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="registry-server" containerID="cri-o://6a7cf1024ca84ad5f00225038aef1e06f6e283b32b73360b4fe975b5f279b898" gracePeriod=2 Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.323169 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.323904 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.375027 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.807926 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.807968 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.869802 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.914360 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:18:16 crc kubenswrapper[4746]: I0103 03:18:16.914790 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:17 crc kubenswrapper[4746]: I0103 03:18:17.876796 4746 generic.go:334] "Generic (PLEG): container finished" podID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerID="6a7cf1024ca84ad5f00225038aef1e06f6e283b32b73360b4fe975b5f279b898" exitCode=0 Jan 03 03:18:17 crc kubenswrapper[4746]: I0103 03:18:17.876872 
4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerDied","Data":"6a7cf1024ca84ad5f00225038aef1e06f6e283b32b73360b4fe975b5f279b898"} Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.541607 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.591699 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.591979 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zr2zs" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="registry-server" containerID="cri-o://8a65e97250f281039618ffb981e5ffd169d6e74f947ec8e9ab5368fb31d917ba" gracePeriod=2 Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.679191 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljzk\" (UniqueName: \"kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk\") pod \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.679286 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities\") pod \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.679402 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content\") pod \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\" (UID: \"ec6b70ea-cb0f-4bff-a489-69a988a0db5f\") " Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.680287 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities" (OuterVolumeSpecName: "utilities") pod "ec6b70ea-cb0f-4bff-a489-69a988a0db5f" (UID: "ec6b70ea-cb0f-4bff-a489-69a988a0db5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.685699 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk" (OuterVolumeSpecName: "kube-api-access-fljzk") pod "ec6b70ea-cb0f-4bff-a489-69a988a0db5f" (UID: "ec6b70ea-cb0f-4bff-a489-69a988a0db5f"). InnerVolumeSpecName "kube-api-access-fljzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.728258 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec6b70ea-cb0f-4bff-a489-69a988a0db5f" (UID: "ec6b70ea-cb0f-4bff-a489-69a988a0db5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.781144 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.781197 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljzk\" (UniqueName: \"kubernetes.io/projected/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-kube-api-access-fljzk\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.781212 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6b70ea-cb0f-4bff-a489-69a988a0db5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.887896 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9pk6" event={"ID":"ec6b70ea-cb0f-4bff-a489-69a988a0db5f","Type":"ContainerDied","Data":"aaabe690942bf30d7f9923fd5cdd643337248b95dfbca218730d2de37e2a4790"} Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.887942 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9pk6" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.887952 4746 scope.go:117] "RemoveContainer" containerID="6a7cf1024ca84ad5f00225038aef1e06f6e283b32b73360b4fe975b5f279b898" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.905341 4746 scope.go:117] "RemoveContainer" containerID="ca70f27aa7f3cb407f045a089e7c8b794f64d80703fbf68c6a4218c945a3de0a" Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.927085 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.928547 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s9pk6"] Jan 03 03:18:19 crc kubenswrapper[4746]: I0103 03:18:19.930007 4746 scope.go:117] "RemoveContainer" containerID="4e98046c8da9dc92a0ec2cd658df3dce64ec2cbaf6af6b39f04deb647411c67b" Jan 03 03:18:20 crc kubenswrapper[4746]: I0103 03:18:20.470175 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" path="/var/lib/kubelet/pods/ec6b70ea-cb0f-4bff-a489-69a988a0db5f/volumes" Jan 03 03:18:21 crc kubenswrapper[4746]: I0103 03:18:21.901141 4746 generic.go:334] "Generic (PLEG): container finished" podID="b4be3b87-9327-4284-87d3-eef1752a461c" containerID="8a65e97250f281039618ffb981e5ffd169d6e74f947ec8e9ab5368fb31d917ba" exitCode=0 Jan 03 03:18:21 crc kubenswrapper[4746]: I0103 03:18:21.901229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerDied","Data":"8a65e97250f281039618ffb981e5ffd169d6e74f947ec8e9ab5368fb31d917ba"} Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.001584 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.122494 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f8xm\" (UniqueName: \"kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm\") pod \"b4be3b87-9327-4284-87d3-eef1752a461c\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.122805 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities\") pod \"b4be3b87-9327-4284-87d3-eef1752a461c\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.122871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content\") pod \"b4be3b87-9327-4284-87d3-eef1752a461c\" (UID: \"b4be3b87-9327-4284-87d3-eef1752a461c\") " Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.123616 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities" (OuterVolumeSpecName: "utilities") pod "b4be3b87-9327-4284-87d3-eef1752a461c" (UID: "b4be3b87-9327-4284-87d3-eef1752a461c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.129601 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm" (OuterVolumeSpecName: "kube-api-access-4f8xm") pod "b4be3b87-9327-4284-87d3-eef1752a461c" (UID: "b4be3b87-9327-4284-87d3-eef1752a461c"). InnerVolumeSpecName "kube-api-access-4f8xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.225257 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f8xm\" (UniqueName: \"kubernetes.io/projected/b4be3b87-9327-4284-87d3-eef1752a461c-kube-api-access-4f8xm\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.225318 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.265110 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4be3b87-9327-4284-87d3-eef1752a461c" (UID: "b4be3b87-9327-4284-87d3-eef1752a461c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.327309 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4be3b87-9327-4284-87d3-eef1752a461c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.914596 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zr2zs" event={"ID":"b4be3b87-9327-4284-87d3-eef1752a461c","Type":"ContainerDied","Data":"564543c3ea7d24ce1be3307df8b4fbbe92340d4e0a11eea8d150c90491fed8cb"} Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.914646 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zr2zs" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.914649 4746 scope.go:117] "RemoveContainer" containerID="8a65e97250f281039618ffb981e5ffd169d6e74f947ec8e9ab5368fb31d917ba" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.943511 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.946097 4746 scope.go:117] "RemoveContainer" containerID="16a659feb32757a1de8dc601fed9baa72ca76df467765adc71219dac02050c50" Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.946435 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zr2zs"] Jan 03 03:18:23 crc kubenswrapper[4746]: I0103 03:18:23.961509 4746 scope.go:117] "RemoveContainer" containerID="d996d2320ff9000aadbb8ba142f61b2db994bfe323b35f6caa65c935d67310d0" Jan 03 03:18:24 crc kubenswrapper[4746]: I0103 03:18:24.475619 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" path="/var/lib/kubelet/pods/b4be3b87-9327-4284-87d3-eef1752a461c/volumes" Jan 03 03:18:25 crc kubenswrapper[4746]: I0103 03:18:25.040407 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sw9vc"] Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.715532 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716246 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716258 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716270 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716276 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716285 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716291 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="registry-server" Jan 03 03:18:39 crc 
kubenswrapper[4746]: E0103 03:18:39.716300 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716306 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716314 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716320 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716330 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716335 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716348 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716355 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716363 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716368 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="extract-content" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716381 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716387 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716394 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716400 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716408 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716413 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.716425 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" 
containerName="extract-utilities" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716542 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6b70ea-cb0f-4bff-a489-69a988a0db5f" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716551 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4be3b87-9327-4284-87d3-eef1752a461c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716559 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b72a40-0fdf-4e53-aa91-95c8dd8f918c" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716569 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fd97dc-b59e-4136-9b1e-2084eee07a32" containerName="registry-server" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.716924 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717090 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717159 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a" gracePeriod=15 Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717201 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044" gracePeriod=15 Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717283 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2" gracePeriod=15 Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717319 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81" gracePeriod=15 Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.717343 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7" gracePeriod=15 Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719078 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719205 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719213 
4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719224 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719231 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719238 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719244 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719252 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719257 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719266 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719272 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719280 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719287 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: E0103 03:18:39.719299 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719305 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719380 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719387 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719394 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719404 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719411 4746 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.719416 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.762644 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822109 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822208 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822292 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822391 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822432 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822483 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.822528 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923227 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923295 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923331 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923372 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923353 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923513 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923561 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923605 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923625 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923710 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923718 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:39 crc kubenswrapper[4746]: I0103 03:18:39.923738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.064214 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:18:40 crc kubenswrapper[4746]: W0103 03:18:40.095854 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-47e4ce44485caa2b07692df71442007d295f61840bfe7df7038fb9a3045c486a WatchSource:0}: Error finding container 47e4ce44485caa2b07692df71442007d295f61840bfe7df7038fb9a3045c486a: Status 404 returned error can't find the container with id 47e4ce44485caa2b07692df71442007d295f61840bfe7df7038fb9a3045c486a Jan 03 03:18:40 crc kubenswrapper[4746]: E0103 03:18:40.100606 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18871a526dd9510d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 03:18:40.098808077 +0000 UTC m=+239.948698392,LastTimestamp:2026-01-03 03:18:40.098808077 +0000 UTC m=+239.948698392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.104467 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"47e4ce44485caa2b07692df71442007d295f61840bfe7df7038fb9a3045c486a"} Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.107171 4746 generic.go:334] "Generic (PLEG): container finished" podID="25948708-1b20-4dd8-8073-ec76836c38d3" containerID="7c6163ad56cbe3fd7d154ca1bd36ab791f3ef18a361b4e513ac0d6b393fb5b94" exitCode=0 Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.107320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25948708-1b20-4dd8-8073-ec76836c38d3","Type":"ContainerDied","Data":"7c6163ad56cbe3fd7d154ca1bd36ab791f3ef18a361b4e513ac0d6b393fb5b94"} Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.108214 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.108666 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 
03:18:40.109187 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.110504 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.111570 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.112279 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7" exitCode=0 Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.112302 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044" exitCode=0 Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.112310 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81" exitCode=0 Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.112319 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2" exitCode=2 Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.112372 4746 scope.go:117] "RemoveContainer" containerID="d728494c0f7d9fae5448d2da49957d0c49a212a9a36b377ad500fbc83f664e23" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.467213 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.467474 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:40 crc kubenswrapper[4746]: I0103 03:18:40.468262 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.121103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8abcfbc2f3d5e34a617079abe06b0f864dada27bd01712ce883da294a69aaed0"} 
Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.122932 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.123529 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.126714 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.466071 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.466741 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.467445 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.646436 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir\") pod \"25948708-1b20-4dd8-8073-ec76836c38d3\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.646605 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock\") pod \"25948708-1b20-4dd8-8073-ec76836c38d3\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.646641 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access\") pod \"25948708-1b20-4dd8-8073-ec76836c38d3\" (UID: \"25948708-1b20-4dd8-8073-ec76836c38d3\") " Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.646592 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "25948708-1b20-4dd8-8073-ec76836c38d3" (UID: "25948708-1b20-4dd8-8073-ec76836c38d3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.646624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock" (OuterVolumeSpecName: "var-lock") pod "25948708-1b20-4dd8-8073-ec76836c38d3" (UID: "25948708-1b20-4dd8-8073-ec76836c38d3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.647027 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-var-lock\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.647055 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25948708-1b20-4dd8-8073-ec76836c38d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.654022 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "25948708-1b20-4dd8-8073-ec76836c38d3" (UID: "25948708-1b20-4dd8-8073-ec76836c38d3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:41 crc kubenswrapper[4746]: I0103 03:18:41.748728 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/25948708-1b20-4dd8-8073-ec76836c38d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.135479 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"25948708-1b20-4dd8-8073-ec76836c38d3","Type":"ContainerDied","Data":"43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a"} Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.137267 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43bf1ad669e8810465537b0c4bf2c8ba3c451240ce021b7d07eaac14c8f1943a" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.135516 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.186169 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.187308 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.190077 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.190999 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.191797 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.192279 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.192860 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363365 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363578 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363713 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.363816 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.364358 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.364393 4746 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.364411 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:42 crc kubenswrapper[4746]: I0103 03:18:42.474834 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.147277 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.148442 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a" exitCode=0 Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.148504 4746 scope.go:117] "RemoveContainer" containerID="5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.148595 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.149731 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.150105 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.150600 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.152777 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.153096 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.153382 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.162605 4746 scope.go:117] "RemoveContainer" containerID="db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.176360 4746 scope.go:117] "RemoveContainer" containerID="38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.189808 4746 scope.go:117] "RemoveContainer" containerID="864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.201297 4746 scope.go:117] "RemoveContainer" containerID="52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.215052 4746 scope.go:117] "RemoveContainer" containerID="3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.233890 4746 scope.go:117] "RemoveContainer" containerID="5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7" Jan 03 03:18:43 crc 
kubenswrapper[4746]: E0103 03:18:43.234443 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\": container with ID starting with 5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7 not found: ID does not exist" containerID="5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.234473 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7"} err="failed to get container status \"5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\": rpc error: code = NotFound desc = could not find container \"5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7\": container with ID starting with 5c1df6ed5655c7b62971c8352efa69b094eaf7dd4b76cdd7ab82db7989d240d7 not found: ID does not exist" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.234500 4746 scope.go:117] "RemoveContainer" containerID="db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044" Jan 03 03:18:43 crc kubenswrapper[4746]: E0103 03:18:43.234733 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\": container with ID starting with db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044 not found: ID does not exist" containerID="db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.234756 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044"} err="failed to get container status \"db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\": rpc error: code = NotFound desc = could not find container \"db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044\": container with ID starting with db354a631b20fb64a15b8b345fe7274dfb09ad0af6acfaff4213c087ac6e7044 not found: ID does not exist" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.234771 4746 scope.go:117] "RemoveContainer" containerID="38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81" Jan 03 03:18:43 crc kubenswrapper[4746]: E0103 03:18:43.241499 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\": container with ID starting with 38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81 not found: ID does not exist" containerID="38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.241544 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81"} err="failed to get container status \"38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\": rpc error: code = NotFound desc = could not find container \"38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81\": container with ID starting with 38303bd344a5b9cb5fa178305ae097c427ca219671ee7f7e2ffa3879401c3a81 not found: ID does not exist" Jan 03 03:18:43 crc kubenswrapper[4746]: 
I0103 03:18:43.241597 4746 scope.go:117] "RemoveContainer" containerID="864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2" Jan 03 03:18:43 crc kubenswrapper[4746]: E0103 03:18:43.242548 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\": container with ID starting with 864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2 not found: ID does not exist" containerID="864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.242611 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2"} err="failed to get container status \"864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\": rpc error: code = NotFound desc = could not find container \"864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2\": container with ID starting with 864ee52be08573001544bd78972e28fda26959c0c98503bf822bcdb73d9f8ce2 not found: ID does not exist" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.242631 4746 scope.go:117] "RemoveContainer" containerID="52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a" Jan 03 03:18:43 crc kubenswrapper[4746]: E0103 03:18:43.244231 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\": container with ID starting with 52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a not found: ID does not exist" containerID="52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.244293 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a"} err="failed to get container status \"52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\": rpc error: code = NotFound desc = could not find container \"52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a\": container with ID starting with 52a03d02decba5d19a32d2beab3beade955af78df93d669e832c19237bf8b16a not found: ID does not exist" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.244313 4746 scope.go:117] "RemoveContainer" containerID="3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa" Jan 03 03:18:43 crc kubenswrapper[4746]: E0103 03:18:43.244555 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\": container with ID starting with 3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa not found: ID does not exist" containerID="3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa" Jan 03 03:18:43 crc kubenswrapper[4746]: I0103 03:18:43.244572 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa"} err="failed to get container status \"3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\": rpc error: code = NotFound desc = could not find container \"3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa\": container 
with ID starting with 3cd78bacfea9d51c7b1ce336574147275a6f5b3d0e2a303ed3887a36665cd3aa not found: ID does not exist" Jan 03 03:18:45 crc kubenswrapper[4746]: E0103 03:18:45.188994 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18871a526dd9510d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-03 03:18:40.098808077 +0000 UTC m=+239.948698392,LastTimestamp:2026-01-03 03:18:40.098808077 +0000 UTC m=+239.948698392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.114508 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.114867 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.115170 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.115481 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.115863 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:46 crc kubenswrapper[4746]: I0103 03:18:46.115929 4746 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.116344 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.317199 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms" Jan 03 03:18:46 crc kubenswrapper[4746]: E0103 03:18:46.718951 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms" Jan 03 03:18:47 crc kubenswrapper[4746]: E0103 03:18:47.520474 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s" Jan 03 03:18:49 crc kubenswrapper[4746]: E0103 03:18:49.122365 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="3.2s" Jan 03 03:18:50 crc kubenswrapper[4746]: I0103 03:18:50.074258 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerName="oauth-openshift" containerID="cri-o://0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85" gracePeriod=15 Jan 03 03:18:50 crc kubenswrapper[4746]: I0103 03:18:50.470307 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:50 crc kubenswrapper[4746]: I0103 03:18:50.471737 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.023149 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.024462 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.025184 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.025765 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179503 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179758 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179827 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179897 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.179990 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180079 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180174 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180213 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180267 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180308 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180347 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9xl\" (UniqueName: \"kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.180407 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs\") pod \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\" (UID: \"6d8cd430-5229-4772-8c83-9fbdbeaf54de\") " Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.181430 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: 
"6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.181919 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.182068 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.182163 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.182435 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.188699 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.190110 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.190848 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.191193 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.191444 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.191841 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.192723 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.193792 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl" (OuterVolumeSpecName: "kube-api-access-cs9xl") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "kube-api-access-cs9xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.194166 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6d8cd430-5229-4772-8c83-9fbdbeaf54de" (UID: "6d8cd430-5229-4772-8c83-9fbdbeaf54de"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.208122 4746 generic.go:334] "Generic (PLEG): container finished" podID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerID="0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85" exitCode=0 Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.208189 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.208196 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" event={"ID":"6d8cd430-5229-4772-8c83-9fbdbeaf54de","Type":"ContainerDied","Data":"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85"} Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.208245 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" event={"ID":"6d8cd430-5229-4772-8c83-9fbdbeaf54de","Type":"ContainerDied","Data":"5fb99619bfbbdabcf6413d2dd121e96f099962acafca451f14545b5b9109236c"} Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.208273 4746 scope.go:117] "RemoveContainer" containerID="0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.209301 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.209594 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.210146 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.236490 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.237239 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.237497 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.247638 4746 scope.go:117] "RemoveContainer" containerID="0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85" Jan 03 03:18:51 
crc kubenswrapper[4746]: E0103 03:18:51.248880 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85\": container with ID starting with 0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85 not found: ID does not exist" containerID="0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.248973 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85"} err="failed to get container status \"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85\": rpc error: code = NotFound desc = could not find container \"0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85\": container with ID starting with 0cdc6d20e63ee220a3837eba8b45ebd9bee6a934dc2ca363c2679ab2f3f42e85 not found: ID does not exist" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282158 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282283 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282307 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282324 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282343 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282359 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282375 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs9xl\" (UniqueName: \"kubernetes.io/projected/6d8cd430-5229-4772-8c83-9fbdbeaf54de-kube-api-access-cs9xl\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282392 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282411 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282427 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282444 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282461 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282480 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:51 crc kubenswrapper[4746]: I0103 03:18:51.282495 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d8cd430-5229-4772-8c83-9fbdbeaf54de-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:18:52 crc kubenswrapper[4746]: E0103 03:18:52.324234 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="6.4s" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.465029 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.465947 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.466586 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.467581 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.480260 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.480304 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:52 crc kubenswrapper[4746]: E0103 03:18:52.480831 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:52 crc kubenswrapper[4746]: I0103 03:18:52.481287 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.224836 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.225360 4746 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e" exitCode=1 Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.225467 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e"} Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.225901 4746 scope.go:117] "RemoveContainer" containerID="372035a13385065c9aad93efc16314ba7b56827d7975882580314bf54bdb284e" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.227083 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.227902 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78d2e4135f8d87ec319db7375c869dbbea901abc412e9f126602835538a956c8"} Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.227956 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26fb249c83b7182867a365479cb9a414759c76c85335d26187a0409e6d0a36e7"} Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.227898 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.228207 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.228542 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:53 crc kubenswrapper[4746]: I0103 03:18:53.398857 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.241063 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.241693 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c23e60d83f1dc46bd271975ca3e0e479063cb0e165231983ba44d96d3feaf93b"} Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.242565 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.243013 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.243750 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.244302 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.244326 4746 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="78d2e4135f8d87ec319db7375c869dbbea901abc412e9f126602835538a956c8" exitCode=0 Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.244374 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"78d2e4135f8d87ec319db7375c869dbbea901abc412e9f126602835538a956c8"} Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.244924 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.244969 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.245049 4746 status_manager.go:851] "Failed to get status for pod" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" pod="openshift-authentication/oauth-openshift-558db77b4-sw9vc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-sw9vc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: E0103 03:18:54.245640 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.245694 4746 status_manager.go:851] "Failed to get status for pod" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.246281 4746 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:54 crc kubenswrapper[4746]: I0103 03:18:54.246609 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 03 03:18:55 crc kubenswrapper[4746]: I0103 03:18:55.258093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b2e9b9315daa398314c4f3d81e65f0465c419cbdbe3d53473e6998e1cf69a69"} Jan 03 03:18:56 crc kubenswrapper[4746]: I0103 03:18:56.265855 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eb6e851f724411573515bffc681509678728e3f0f07e06b9c1da92bbd5aac15f"} Jan 03 03:18:57 crc kubenswrapper[4746]: I0103 03:18:57.056016 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:18:57 crc kubenswrapper[4746]: I0103 03:18:57.276421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ce627c0b9fcdafca051a01384ceba5ba7d0543a556a9ac3cfd627f055683fef"} Jan 03 03:18:57 crc kubenswrapper[4746]: I0103 03:18:57.276492 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"067f7fa94d38388b306606faea119a26e18d269b21fbe2ea74d96f6dadc4374c"} Jan 03 03:18:58 crc kubenswrapper[4746]: I0103 03:18:58.285426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92d3f10336f37d92a02a46b4a43308a2df9208d3743da92a297dc73c53ae9161"} Jan 03 03:18:58 crc 
kubenswrapper[4746]: I0103 03:18:58.285608 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:58 crc kubenswrapper[4746]: I0103 03:18:58.285706 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:58 crc kubenswrapper[4746]: I0103 03:18:58.285734 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:58 crc kubenswrapper[4746]: I0103 03:18:58.293735 4746 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:18:59 crc kubenswrapper[4746]: I0103 03:18:59.290240 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:18:59 crc kubenswrapper[4746]: I0103 03:18:59.290547 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:19:00 crc kubenswrapper[4746]: I0103 03:19:00.490329 4746 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a773cc4a-2d7c-471a-8ef3-4e8d9ff62a6d" Jan 03 03:19:03 crc kubenswrapper[4746]: I0103 03:19:03.398738 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:19:03 crc kubenswrapper[4746]: I0103 03:19:03.405886 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:19:04 crc kubenswrapper[4746]: I0103 03:19:04.326981 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 03 03:19:10 crc kubenswrapper[4746]: I0103 03:19:10.818918 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 03 03:19:11 crc kubenswrapper[4746]: I0103 03:19:11.004105 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 03 03:19:11 crc kubenswrapper[4746]: I0103 03:19:11.026640 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 03 03:19:12 crc kubenswrapper[4746]: I0103 03:19:12.398325 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 03 03:19:12 crc kubenswrapper[4746]: I0103 03:19:12.510052 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 03 03:19:12 crc kubenswrapper[4746]: I0103 03:19:12.529925 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 03:19:13 crc kubenswrapper[4746]: I0103 03:19:13.115941 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 03 03:19:13 crc kubenswrapper[4746]: I0103 03:19:13.321526 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 03 03:19:13 
crc kubenswrapper[4746]: I0103 03:19:13.443174 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 03 03:19:13 crc kubenswrapper[4746]: I0103 03:19:13.464693 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 03 03:19:13 crc kubenswrapper[4746]: I0103 03:19:13.668696 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 03 03:19:13 crc kubenswrapper[4746]: I0103 03:19:13.942692 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.072246 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.342489 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.349586 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.435289 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.463190 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.583778 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.679830 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 03:19:14 crc kubenswrapper[4746]: I0103 03:19:14.812696 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.065098 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.083102 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.208267 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.208553 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.367577 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.399160 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.416336 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.500210 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.523039 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.564573 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.578260 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.616116 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.686489 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.710768 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.752823 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.814357 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.829151 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.829649 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.845499 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.936397 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.944024 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.944476 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 03 03:19:15 crc kubenswrapper[4746]: I0103 03:19:15.969984 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.057794 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.145457 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.159342 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.226676 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.233037 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.239095 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.326809 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.537871 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.661139 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.687499 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.765846 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.829517 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.898023 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 03 03:19:16 crc kubenswrapper[4746]: I0103 03:19:16.977770 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.015096 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.023774 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.149593 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.225532 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.314046 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.439074 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.447882 4746 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.451350 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.473981 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.482029 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.483705 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.520817 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.609107 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.655103 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.718636 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.754583 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.786811 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.861761 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 03 03:19:17 crc kubenswrapper[4746]: I0103 03:19:17.973869 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.062605 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.098600 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.113799 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.170001 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.189823 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.235564 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.266911 4746 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.292365 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.293483 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.296250 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.426197 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.480170 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.491578 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.513378 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.516212 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.573960 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.585768 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.621864 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.632514 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.667564 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.682188 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.705965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.942304 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.959908 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.962103 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.977526 
4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 03:19:18 crc kubenswrapper[4746]: I0103 03:19:18.979484 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.007051 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.058453 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.126930 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.147800 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.166239 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.167821 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.231005 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.397925 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.512849 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.535146 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.603625 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.635406 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.688532 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.734366 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.742517 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.801836 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.957080 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 03 03:19:19 crc kubenswrapper[4746]: I0103 03:19:19.995067 4746 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.001040 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.089526 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.205718 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.260263 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.284016 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.288331 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.311704 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.416760 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.465716 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.475805 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.500378 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.542632 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.580116 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.633204 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.644682 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.674903 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.754646 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.842405 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.917755 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 
03:19:20.945607 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 03 03:19:20 crc kubenswrapper[4746]: I0103 03:19:20.961453 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.048159 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.056769 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.131080 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.221688 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.230619 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.236437 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.320903 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.370705 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.441223 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.562459 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.601389 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.616891 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.765178 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.777281 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.923835 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.944029 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 03:19:21.989210 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 03 03:19:21 crc kubenswrapper[4746]: I0103 
03:19:21.996420 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.005673 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.094893 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.243950 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.311950 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.323937 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.393043 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.415169 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.419526 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.430891 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.489293 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.505721 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.550671 4746 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.555152 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.555136888 podStartE2EDuration="43.555136888s" podCreationTimestamp="2026-01-03 03:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:18:58.360337688 +0000 UTC m=+258.210227993" watchObservedRunningTime="2026-01-03 03:19:22.555136888 +0000 UTC m=+282.405027213" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556044 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-sw9vc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556102 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d96b794dc-4t5gr","openshift-kube-apiserver/kube-apiserver-crc"] Jan 03 03:19:22 crc kubenswrapper[4746]: E0103 03:19:22.556318 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerName="oauth-openshift" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556333 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerName="oauth-openshift" Jan 03 03:19:22 crc kubenswrapper[4746]: E0103 03:19:22.556357 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" containerName="installer" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556368 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" containerName="installer" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556489 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" containerName="oauth-openshift" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556501 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="25948708-1b20-4dd8-8073-ec76836c38d3" containerName="installer" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556782 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.556809 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cb36226-f723-4cc8-b765-07aaa195cd44" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.557033 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.558980 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.559200 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.559597 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.560090 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.560824 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.560943 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561422 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561474 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561551 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561640 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" 
Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561729 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.561807 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.562007 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563597 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6t4\" (UniqueName: \"kubernetes.io/projected/f9531579-e7b9-47a6-8715-19471c10afd1-kube-api-access-gt6t4\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563645 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-audit-policies\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563701 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563728 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9531579-e7b9-47a6-8715-19471c10afd1-audit-dir\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563754 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-login\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563816 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-service-ca\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563866 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-session\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563897 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563920 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563955 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.563995 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-router-certs\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.564048 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-error\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.564082 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 
03:19:22.568545 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.571745 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.576457 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.583541 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.610376 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.610353663 podStartE2EDuration="24.610353663s" podCreationTimestamp="2026-01-03 03:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:19:22.603863903 +0000 UTC m=+282.453754228" watchObservedRunningTime="2026-01-03 03:19:22.610353663 +0000 UTC m=+282.460243968" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.641096 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665404 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665463 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-service-ca\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665487 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-session\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665510 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665555 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665584 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-router-certs\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665622 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-error\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665640 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665684 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt6t4\" (UniqueName: \"kubernetes.io/projected/f9531579-e7b9-47a6-8715-19471c10afd1-kube-api-access-gt6t4\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665703 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-audit-policies\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665727 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665747 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9531579-e7b9-47a6-8715-19471c10afd1-audit-dir\") pod 
\"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.665762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-login\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.666934 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9531579-e7b9-47a6-8715-19471c10afd1-audit-dir\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.667240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.668355 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-service-ca\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.669813 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.671444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.672634 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-router-certs\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.672768 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.672960 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.673088 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f9531579-e7b9-47a6-8715-19471c10afd1-audit-policies\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.673969 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.674472 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-error\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.675202 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-session\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.675348 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.676628 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f9531579-e7b9-47a6-8715-19471c10afd1-v4-0-config-user-template-login\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.687039 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt6t4\" (UniqueName: \"kubernetes.io/projected/f9531579-e7b9-47a6-8715-19471c10afd1-kube-api-access-gt6t4\") pod \"oauth-openshift-d96b794dc-4t5gr\" (UID: \"f9531579-e7b9-47a6-8715-19471c10afd1\") " pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.723550 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.733339 4746 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.766962 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.778147 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.877793 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.881112 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:22 crc kubenswrapper[4746]: I0103 03:19:22.975594 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.040224 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.048946 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.107363 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.240756 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.307520 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.326586 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.327145 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.501404 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.576559 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.816421 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.824974 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 03 03:19:23 crc kubenswrapper[4746]: I0103 03:19:23.828007 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.074212 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.189809 4746 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.204473 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.272913 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.298447 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.400224 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.437152 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.470724 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8cd430-5229-4772-8c83-9fbdbeaf54de" path="/var/lib/kubelet/pods/6d8cd430-5229-4772-8c83-9fbdbeaf54de/volumes" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.512397 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d96b794dc-4t5gr"] Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.518674 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.540642 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.561414 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.585868 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.622398 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.659284 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.686856 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 03 03:19:24 crc kubenswrapper[4746]: I0103 03:19:24.910035 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.006475 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.041932 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.086554 4746 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.150822 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.168883 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d96b794dc-4t5gr"] Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.292151 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.398730 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.426783 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.440161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" event={"ID":"f9531579-e7b9-47a6-8715-19471c10afd1","Type":"ContainerStarted","Data":"398414eea82150e2f659407c18963d6f0444dde1b4890f9ca36345e4617fd2d8"} Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.520755 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.612205 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.621624 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.634311 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.768715 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.795128 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 03 03:19:25 crc kubenswrapper[4746]: I0103 03:19:25.888083 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.047277 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.348091 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.368295 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.446334 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" event={"ID":"f9531579-e7b9-47a6-8715-19471c10afd1","Type":"ContainerStarted","Data":"93874d7b2a481bd014bd12d4f6bc6e3e5463fa6de88f4badd92e1299fdca9367"} Jan 03 
03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.446608 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.452244 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.470940 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d96b794dc-4t5gr" podStartSLOduration=61.470925764 podStartE2EDuration="1m1.470925764s" podCreationTimestamp="2026-01-03 03:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:19:26.466922196 +0000 UTC m=+286.316812501" watchObservedRunningTime="2026-01-03 03:19:26.470925764 +0000 UTC m=+286.320816069" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.486991 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.487555 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.628464 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.644335 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.826521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 03 03:19:26 crc kubenswrapper[4746]: I0103 03:19:26.983731 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.133863 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.390381 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.459136 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.481616 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.481755 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.486749 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:19:27 crc kubenswrapper[4746]: I0103 03:19:27.578987 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 03 03:19:28 crc kubenswrapper[4746]: I0103 03:19:28.477043 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 03 03:19:28 crc 
kubenswrapper[4746]: I0103 03:19:28.680963 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 03 03:19:29 crc kubenswrapper[4746]: I0103 03:19:29.377554 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 03 03:19:34 crc kubenswrapper[4746]: I0103 03:19:34.027489 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 03:19:34 crc kubenswrapper[4746]: I0103 03:19:34.028040 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8abcfbc2f3d5e34a617079abe06b0f864dada27bd01712ce883da294a69aaed0" gracePeriod=5 Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.525836 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.527783 4746 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8abcfbc2f3d5e34a617079abe06b0f864dada27bd01712ce883da294a69aaed0" exitCode=137 Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.602305 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.602621 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.693071 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.693137 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.693260 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.693305 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.693372 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.694048 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.694085 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.694048 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.694046 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.704311 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.794570 4746 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.794603 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.794614 4746 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.794625 4746 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:39 crc kubenswrapper[4746]: I0103 03:19:39.794636 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.307428 4746 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.476270 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.476552 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.494887 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.494973 4746 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0c000291-26c7-4871-9597-7e85fd6fd467" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.501404 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.501541 4746 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0c000291-26c7-4871-9597-7e85fd6fd467" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.534177 4746 scope.go:117] "RemoveContainer" containerID="8abcfbc2f3d5e34a617079abe06b0f864dada27bd01712ce883da294a69aaed0" Jan 03 03:19:40 crc kubenswrapper[4746]: I0103 03:19:40.534207 4746 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 03 03:19:43 crc kubenswrapper[4746]: I0103 03:19:43.551021 4746 generic.go:334] "Generic (PLEG): container finished" podID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerID="67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17" exitCode=0 Jan 03 03:19:43 crc kubenswrapper[4746]: I0103 03:19:43.551125 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerDied","Data":"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17"} Jan 03 03:19:43 crc kubenswrapper[4746]: I0103 03:19:43.552641 4746 scope.go:117] "RemoveContainer" containerID="67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17" Jan 03 03:19:44 crc kubenswrapper[4746]: I0103 03:19:44.559465 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerStarted","Data":"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8"} Jan 03 03:19:44 crc kubenswrapper[4746]: I0103 03:19:44.559864 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:19:44 crc kubenswrapper[4746]: I0103 03:19:44.562592 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:19:55 crc kubenswrapper[4746]: I0103 03:19:55.735346 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:19:55 crc kubenswrapper[4746]: I0103 03:19:55.736431 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" podUID="45e8e97f-f055-4a33-94fa-687aa5893d06" containerName="controller-manager" containerID="cri-o://eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270" gracePeriod=30 Jan 03 03:19:55 crc kubenswrapper[4746]: I0103 03:19:55.830117 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:19:55 crc kubenswrapper[4746]: I0103 03:19:55.830865 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" podUID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" containerName="route-controller-manager" containerID="cri-o://fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98" gracePeriod=30 Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.103484 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.179315 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.234586 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert\") pod \"45e8e97f-f055-4a33-94fa-687aa5893d06\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.234695 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzdm\" (UniqueName: \"kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm\") pod \"45e8e97f-f055-4a33-94fa-687aa5893d06\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.234768 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config\") pod \"45e8e97f-f055-4a33-94fa-687aa5893d06\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.234896 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles\") pod \"45e8e97f-f055-4a33-94fa-687aa5893d06\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.234949 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca\") pod \"45e8e97f-f055-4a33-94fa-687aa5893d06\" (UID: \"45e8e97f-f055-4a33-94fa-687aa5893d06\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.235389 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45e8e97f-f055-4a33-94fa-687aa5893d06" (UID: "45e8e97f-f055-4a33-94fa-687aa5893d06"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.235464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config" (OuterVolumeSpecName: "config") pod "45e8e97f-f055-4a33-94fa-687aa5893d06" (UID: "45e8e97f-f055-4a33-94fa-687aa5893d06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.235928 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca" (OuterVolumeSpecName: "client-ca") pod "45e8e97f-f055-4a33-94fa-687aa5893d06" (UID: "45e8e97f-f055-4a33-94fa-687aa5893d06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.242460 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm" (OuterVolumeSpecName: "kube-api-access-fkzdm") pod "45e8e97f-f055-4a33-94fa-687aa5893d06" (UID: "45e8e97f-f055-4a33-94fa-687aa5893d06"). 
InnerVolumeSpecName "kube-api-access-fkzdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.242933 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45e8e97f-f055-4a33-94fa-687aa5893d06" (UID: "45e8e97f-f055-4a33-94fa-687aa5893d06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.335941 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqprk\" (UniqueName: \"kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk\") pod \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336072 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca\") pod \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336118 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert\") pod \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config\") pod \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\" (UID: \"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8\") " Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336562 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336584 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336599 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e8e97f-f055-4a33-94fa-687aa5893d06-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336612 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkzdm\" (UniqueName: \"kubernetes.io/projected/45e8e97f-f055-4a33-94fa-687aa5893d06-kube-api-access-fkzdm\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336628 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e8e97f-f055-4a33-94fa-687aa5893d06-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.336862 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" (UID: 
"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.337456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config" (OuterVolumeSpecName: "config") pod "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" (UID: "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.341629 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk" (OuterVolumeSpecName: "kube-api-access-vqprk") pod "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" (UID: "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8"). InnerVolumeSpecName "kube-api-access-vqprk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.342132 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" (UID: "3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.438075 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqprk\" (UniqueName: \"kubernetes.io/projected/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-kube-api-access-vqprk\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.438109 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.438119 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.438129 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.639166 4746 generic.go:334] "Generic (PLEG): container finished" podID="45e8e97f-f055-4a33-94fa-687aa5893d06" containerID="eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270" exitCode=0 Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.639225 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" event={"ID":"45e8e97f-f055-4a33-94fa-687aa5893d06","Type":"ContainerDied","Data":"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270"} Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.639695 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" event={"ID":"45e8e97f-f055-4a33-94fa-687aa5893d06","Type":"ContainerDied","Data":"d6c7b5e041edae07a9e94ed0d4d4e8fe1d7555b01f6bfeaf565df578f49d26f1"} Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.639764 4746 scope.go:117] "RemoveContainer" 
containerID="eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.639252 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-58c52" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.643056 4746 generic.go:334] "Generic (PLEG): container finished" podID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" containerID="fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98" exitCode=0 Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.643109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" event={"ID":"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8","Type":"ContainerDied","Data":"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98"} Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.643144 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" event={"ID":"3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8","Type":"ContainerDied","Data":"58fd30e776e425a56eee45b4559c7ccc0314a4102496aa5153736dc167b740fa"} Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.643116 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.663821 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.666583 4746 scope.go:117] "RemoveContainer" containerID="eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270" Jan 03 03:19:56 crc kubenswrapper[4746]: E0103 03:19:56.670440 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270\": container with ID starting with eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270 not found: ID does not exist" containerID="eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.670477 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270"} err="failed to get container status \"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270\": rpc error: code = NotFound desc = could not find container \"eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270\": container with ID starting with eb08a39d1243749189cc436f72ea4ed09fd5074b8592905c304b34c2ddcf8270 not found: ID does not exist" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.670503 4746 scope.go:117] "RemoveContainer" containerID="fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.671698 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-h87hg"] Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.679003 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.683466 4746 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-58c52"] Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.698137 4746 scope.go:117] "RemoveContainer" containerID="fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98" Jan 03 03:19:56 crc kubenswrapper[4746]: E0103 03:19:56.698488 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98\": container with ID starting with fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98 not found: ID does not exist" containerID="fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98" Jan 03 03:19:56 crc kubenswrapper[4746]: I0103 03:19:56.698518 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98"} err="failed to get container status \"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98\": rpc error: code = NotFound desc = could not find container \"fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98\": container with ID starting with fe7c76693c3d956dac2ad26907ae3c5d01174dd021ef89f8b06c3b70f8220b98 not found: ID does not exist" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158195 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr"] Jan 03 03:19:57 crc kubenswrapper[4746]: E0103 03:19:57.158490 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" containerName="route-controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158508 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" containerName="route-controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: E0103 03:19:57.158522 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e8e97f-f055-4a33-94fa-687aa5893d06" containerName="controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158532 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e8e97f-f055-4a33-94fa-687aa5893d06" containerName="controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: E0103 03:19:57.158548 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158556 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158704 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e8e97f-f055-4a33-94fa-687aa5893d06" containerName="controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158719 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" containerName="route-controller-manager" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.158735 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.159270 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.162301 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.162981 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.162973 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.166368 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.166429 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.166476 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.183496 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.188028 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr"] Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.196317 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g"] Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.197555 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.200207 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.200346 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.200530 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.201860 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.202740 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.205988 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.229801 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g"] Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.258859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mc6\" (UniqueName: \"kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.258949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.258997 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj997\" (UniqueName: \"kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259045 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259075 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca\") pod 
\"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259128 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259158 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259194 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.259261 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.311185 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr"] Jan 03 03:19:57 crc kubenswrapper[4746]: E0103 03:19:57.311712 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-rj997 proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" podUID="289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.335120 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g"] Jan 03 03:19:57 crc kubenswrapper[4746]: E0103 03:19:57.335511 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-24mc6 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" podUID="1a7cca94-fc2f-452b-b76c-f673ee0cee46" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.359680 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24mc6\" (UniqueName: \"kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.359868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360013 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj997\" (UniqueName: \"kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360094 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360121 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360168 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360190 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360229 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.360350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.361296 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.361423 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.361443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.361523 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.361836 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.365699 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.365777 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert\") pod \"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.380443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24mc6\" (UniqueName: \"kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6\") pod \"route-controller-manager-6d9cfd5c64-xfb4g\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.383112 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj997\" (UniqueName: \"kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997\") pod 
\"controller-manager-7d5957c6c6-g8bpr\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.653421 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.653455 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.662441 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.665455 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24mc6\" (UniqueName: \"kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6\") pod \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.665671 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert\") pod \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.665750 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca\") pod \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.665797 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config\") pod \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\" (UID: \"1a7cca94-fc2f-452b-b76c-f673ee0cee46\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.666467 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a7cca94-fc2f-452b-b76c-f673ee0cee46" (UID: "1a7cca94-fc2f-452b-b76c-f673ee0cee46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.666607 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config" (OuterVolumeSpecName: "config") pod "1a7cca94-fc2f-452b-b76c-f673ee0cee46" (UID: "1a7cca94-fc2f-452b-b76c-f673ee0cee46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.668289 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.668975 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6" (OuterVolumeSpecName: "kube-api-access-24mc6") pod "1a7cca94-fc2f-452b-b76c-f673ee0cee46" (UID: "1a7cca94-fc2f-452b-b76c-f673ee0cee46"). InnerVolumeSpecName "kube-api-access-24mc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.669537 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a7cca94-fc2f-452b-b76c-f673ee0cee46" (UID: "1a7cca94-fc2f-452b-b76c-f673ee0cee46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.766534 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles\") pod \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.766906 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config\") pod \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.766934 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert\") pod \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.766963 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj997\" (UniqueName: \"kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997\") pod \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca\") pod \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\" (UID: \"289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1\") " Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767207 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7cca94-fc2f-452b-b76c-f673ee0cee46-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767224 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767236 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7cca94-fc2f-452b-b76c-f673ee0cee46-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc 
kubenswrapper[4746]: I0103 03:19:57.767248 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24mc6\" (UniqueName: \"kubernetes.io/projected/1a7cca94-fc2f-452b-b76c-f673ee0cee46-kube-api-access-24mc6\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767642 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" (UID: "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.767766 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca" (OuterVolumeSpecName: "client-ca") pod "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" (UID: "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.768131 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config" (OuterVolumeSpecName: "config") pod "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" (UID: "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.770180 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" (UID: "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.770209 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997" (OuterVolumeSpecName: "kube-api-access-rj997") pod "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" (UID: "289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1"). InnerVolumeSpecName "kube-api-access-rj997". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.868340 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.868391 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.868411 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.868433 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj997\" (UniqueName: \"kubernetes.io/projected/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-kube-api-access-rj997\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:57 crc kubenswrapper[4746]: I0103 03:19:57.868455 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.480093 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8" path="/var/lib/kubelet/pods/3da2fcf5-fcfe-4efe-9b43-c8e4bf2589c8/volumes" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.481396 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e8e97f-f055-4a33-94fa-687aa5893d06" path="/var/lib/kubelet/pods/45e8e97f-f055-4a33-94fa-687aa5893d06/volumes" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.657297 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.657317 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.693455 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.698902 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.699734 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703227 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703238 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703632 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703819 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703856 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.703981 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.705795 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-g8bpr"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.712169 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.713585 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.730435 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.739609 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-xfb4g"] Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.883328 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.883388 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.883442 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.883644 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.883702 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nxx\" (UniqueName: \"kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.985831 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.985958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.986014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nxx\" (UniqueName: \"kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.986130 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.986170 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.986918 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.988011 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config\") pod 
\"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.988073 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:58 crc kubenswrapper[4746]: I0103 03:19:58.995366 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.012016 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nxx\" (UniqueName: \"kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx\") pod \"controller-manager-5db558bd57-wl28d\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.034110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.288410 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.666641 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" event={"ID":"53e97c29-e40b-49af-bc9c-0fb6b36800fe","Type":"ContainerStarted","Data":"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663"} Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.666751 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" event={"ID":"53e97c29-e40b-49af-bc9c-0fb6b36800fe","Type":"ContainerStarted","Data":"c1bea88bb6bc05223bd53113489b772d8a0bad200a304fc42ab87121e3994fde"} Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.667186 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.674378 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:19:59 crc kubenswrapper[4746]: I0103 03:19:59.723114 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" podStartSLOduration=2.723071975 podStartE2EDuration="2.723071975s" podCreationTimestamp="2026-01-03 03:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:19:59.716126064 +0000 UTC m=+319.566016369" watchObservedRunningTime="2026-01-03 03:19:59.723071975 +0000 UTC m=+319.572962330" Jan 03 03:20:00 crc kubenswrapper[4746]: I0103 
03:20:00.472238 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7cca94-fc2f-452b-b76c-f673ee0cee46" path="/var/lib/kubelet/pods/1a7cca94-fc2f-452b-b76c-f673ee0cee46/volumes" Jan 03 03:20:00 crc kubenswrapper[4746]: I0103 03:20:00.472908 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1" path="/var/lib/kubelet/pods/289dcbe7-a1d2-4036-a2f7-3bd8edfcc5d1/volumes" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.373522 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.373630 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.375922 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.378044 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.382119 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.382454 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.383063 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.383187 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.384055 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.384361 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.395955 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.426919 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.427281 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.427790 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.427918 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvsc5\" (UniqueName: \"kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.529913 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.530001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.530101 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.530137 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvsc5\" (UniqueName: \"kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.531845 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.532491 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.548193 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.554251 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvsc5\" (UniqueName: \"kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5\") pod \"route-controller-manager-69c79dd4cc-kfzkh\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.705401 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:01 crc kubenswrapper[4746]: I0103 03:20:01.971447 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:02 crc kubenswrapper[4746]: I0103 03:20:02.688144 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" event={"ID":"777d575e-c055-42ca-be78-20531992e780","Type":"ContainerStarted","Data":"281f1d68c3cb563e59af8c42ba3e793b8ffb009dedfcb946f5470c373d5a08c2"} Jan 03 03:20:02 crc kubenswrapper[4746]: I0103 03:20:02.688773 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" event={"ID":"777d575e-c055-42ca-be78-20531992e780","Type":"ContainerStarted","Data":"7f3f6fb3edca39c567bb60bffd814995e4cba8098f50894c3d9f26fc5ed1d090"} Jan 03 03:20:02 crc kubenswrapper[4746]: I0103 03:20:02.688793 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:02 crc kubenswrapper[4746]: I0103 03:20:02.697728 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:02 crc kubenswrapper[4746]: I0103 03:20:02.712620 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" podStartSLOduration=5.712593716 podStartE2EDuration="5.712593716s" podCreationTimestamp="2026-01-03 03:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:20:02.707288135 +0000 UTC m=+322.557178480" watchObservedRunningTime="2026-01-03 03:20:02.712593716 +0000 UTC m=+322.562484051" Jan 03 03:20:31 crc kubenswrapper[4746]: I0103 03:20:31.374156 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:20:31 crc kubenswrapper[4746]: I0103 03:20:31.375089 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:20:35 crc kubenswrapper[4746]: I0103 03:20:35.715801 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:35 crc kubenswrapper[4746]: I0103 03:20:35.716453 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" podUID="777d575e-c055-42ca-be78-20531992e780" containerName="route-controller-manager" containerID="cri-o://281f1d68c3cb563e59af8c42ba3e793b8ffb009dedfcb946f5470c373d5a08c2" gracePeriod=30 Jan 03 03:20:35 crc kubenswrapper[4746]: I0103 03:20:35.911731 4746 generic.go:334] "Generic (PLEG): container finished" podID="777d575e-c055-42ca-be78-20531992e780" containerID="281f1d68c3cb563e59af8c42ba3e793b8ffb009dedfcb946f5470c373d5a08c2" exitCode=0 Jan 03 03:20:35 crc kubenswrapper[4746]: I0103 03:20:35.911918 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" event={"ID":"777d575e-c055-42ca-be78-20531992e780","Type":"ContainerDied","Data":"281f1d68c3cb563e59af8c42ba3e793b8ffb009dedfcb946f5470c373d5a08c2"} Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.075607 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.182617 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca\") pod \"777d575e-c055-42ca-be78-20531992e780\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.182741 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config\") pod \"777d575e-c055-42ca-be78-20531992e780\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.182870 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvsc5\" (UniqueName: \"kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5\") pod \"777d575e-c055-42ca-be78-20531992e780\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.182932 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert\") pod \"777d575e-c055-42ca-be78-20531992e780\" (UID: \"777d575e-c055-42ca-be78-20531992e780\") " Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.183627 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca" (OuterVolumeSpecName: "client-ca") pod "777d575e-c055-42ca-be78-20531992e780" (UID: "777d575e-c055-42ca-be78-20531992e780"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.184365 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config" (OuterVolumeSpecName: "config") pod "777d575e-c055-42ca-be78-20531992e780" (UID: "777d575e-c055-42ca-be78-20531992e780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.188561 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5" (OuterVolumeSpecName: "kube-api-access-rvsc5") pod "777d575e-c055-42ca-be78-20531992e780" (UID: "777d575e-c055-42ca-be78-20531992e780"). InnerVolumeSpecName "kube-api-access-rvsc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.188582 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "777d575e-c055-42ca-be78-20531992e780" (UID: "777d575e-c055-42ca-be78-20531992e780"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.284252 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvsc5\" (UniqueName: \"kubernetes.io/projected/777d575e-c055-42ca-be78-20531992e780-kube-api-access-rvsc5\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.284291 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/777d575e-c055-42ca-be78-20531992e780-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.284304 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.284316 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/777d575e-c055-42ca-be78-20531992e780-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.920094 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" event={"ID":"777d575e-c055-42ca-be78-20531992e780","Type":"ContainerDied","Data":"7f3f6fb3edca39c567bb60bffd814995e4cba8098f50894c3d9f26fc5ed1d090"} Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.920175 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.920547 4746 scope.go:117] "RemoveContainer" containerID="281f1d68c3cb563e59af8c42ba3e793b8ffb009dedfcb946f5470c373d5a08c2" Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.943335 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:36 crc kubenswrapper[4746]: I0103 03:20:36.951408 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c79dd4cc-kfzkh"] Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.399119 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq"] Jan 03 03:20:37 crc kubenswrapper[4746]: E0103 03:20:37.399606 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777d575e-c055-42ca-be78-20531992e780" containerName="route-controller-manager" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.399642 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="777d575e-c055-42ca-be78-20531992e780" containerName="route-controller-manager" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.399978 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="777d575e-c055-42ca-be78-20531992e780" containerName="route-controller-manager" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.400762 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.405841 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.406534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.406907 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 03 03:20:37 crc kubenswrapper[4746]: I0103 03:20:37.407118 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.179762 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.180132 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.184246 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq"] Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.186109 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5352051-0d7d-4e10-a1cc-6ead47f5833c-serving-cert\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.186150 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-client-ca\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.186269 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2m2\" (UniqueName: \"kubernetes.io/projected/c5352051-0d7d-4e10-a1cc-6ead47f5833c-kube-api-access-gj2m2\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.186310 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-config\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.288234 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5352051-0d7d-4e10-a1cc-6ead47f5833c-serving-cert\") pod 
\"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.288308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-client-ca\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.288453 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj2m2\" (UniqueName: \"kubernetes.io/projected/c5352051-0d7d-4e10-a1cc-6ead47f5833c-kube-api-access-gj2m2\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.288524 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-config\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.289888 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-client-ca\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.290901 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5352051-0d7d-4e10-a1cc-6ead47f5833c-config\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.294421 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5352051-0d7d-4e10-a1cc-6ead47f5833c-serving-cert\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.306630 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj2m2\" (UniqueName: \"kubernetes.io/projected/c5352051-0d7d-4e10-a1cc-6ead47f5833c-kube-api-access-gj2m2\") pod \"route-controller-manager-6d9cfd5c64-mrfqq\" (UID: \"c5352051-0d7d-4e10-a1cc-6ead47f5833c\") " pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.471163 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777d575e-c055-42ca-be78-20531992e780" path="/var/lib/kubelet/pods/777d575e-c055-42ca-be78-20531992e780/volumes" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.507026 4746 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:38 crc kubenswrapper[4746]: I0103 03:20:38.898338 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq"] Jan 03 03:20:39 crc kubenswrapper[4746]: I0103 03:20:39.211818 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" event={"ID":"c5352051-0d7d-4e10-a1cc-6ead47f5833c","Type":"ContainerStarted","Data":"90c019f29c6d1f3a304d6907377a0e014fff7099d3568b6237ff1dd8ff5064a9"} Jan 03 03:20:39 crc kubenswrapper[4746]: I0103 03:20:39.213368 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" event={"ID":"c5352051-0d7d-4e10-a1cc-6ead47f5833c","Type":"ContainerStarted","Data":"d3fefe2a9299f14c870083824db28ee285453583a5fc211675c4b2a5544e0b30"} Jan 03 03:20:39 crc kubenswrapper[4746]: I0103 03:20:39.215803 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:39 crc kubenswrapper[4746]: I0103 03:20:39.232301 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" podStartSLOduration=4.232281667 podStartE2EDuration="4.232281667s" podCreationTimestamp="2026-01-03 03:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:20:39.232222995 +0000 UTC m=+359.082113290" watchObservedRunningTime="2026-01-03 03:20:39.232281667 +0000 UTC m=+359.082171972" Jan 03 03:20:39 crc kubenswrapper[4746]: I0103 03:20:39.367218 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d9cfd5c64-mrfqq" Jan 03 03:20:55 crc kubenswrapper[4746]: I0103 03:20:55.709525 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:20:55 crc kubenswrapper[4746]: I0103 03:20:55.711254 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" podUID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" containerName="controller-manager" containerID="cri-o://e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663" gracePeriod=30 Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.077203 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.228867 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nxx\" (UniqueName: \"kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx\") pod \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.228957 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert\") pod \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.229184 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca\") pod \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.229287 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config\") pod \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.229360 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles\") pod \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\" (UID: \"53e97c29-e40b-49af-bc9c-0fb6b36800fe\") " Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.230017 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "53e97c29-e40b-49af-bc9c-0fb6b36800fe" (UID: "53e97c29-e40b-49af-bc9c-0fb6b36800fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.230069 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53e97c29-e40b-49af-bc9c-0fb6b36800fe" (UID: "53e97c29-e40b-49af-bc9c-0fb6b36800fe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.230149 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config" (OuterVolumeSpecName: "config") pod "53e97c29-e40b-49af-bc9c-0fb6b36800fe" (UID: "53e97c29-e40b-49af-bc9c-0fb6b36800fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.234757 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx" (OuterVolumeSpecName: "kube-api-access-w8nxx") pod "53e97c29-e40b-49af-bc9c-0fb6b36800fe" (UID: "53e97c29-e40b-49af-bc9c-0fb6b36800fe"). InnerVolumeSpecName "kube-api-access-w8nxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.237748 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53e97c29-e40b-49af-bc9c-0fb6b36800fe" (UID: "53e97c29-e40b-49af-bc9c-0fb6b36800fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.323558 4746 generic.go:334] "Generic (PLEG): container finished" podID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" containerID="e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663" exitCode=0 Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.323602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" event={"ID":"53e97c29-e40b-49af-bc9c-0fb6b36800fe","Type":"ContainerDied","Data":"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663"} Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.323638 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" event={"ID":"53e97c29-e40b-49af-bc9c-0fb6b36800fe","Type":"ContainerDied","Data":"c1bea88bb6bc05223bd53113489b772d8a0bad200a304fc42ab87121e3994fde"} Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.323678 4746 scope.go:117] "RemoveContainer" containerID="e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.323806 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5db558bd57-wl28d" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.332495 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.332538 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.332552 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nxx\" (UniqueName: \"kubernetes.io/projected/53e97c29-e40b-49af-bc9c-0fb6b36800fe-kube-api-access-w8nxx\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.332563 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e97c29-e40b-49af-bc9c-0fb6b36800fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.332577 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53e97c29-e40b-49af-bc9c-0fb6b36800fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.361546 4746 scope.go:117] "RemoveContainer" containerID="e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.361850 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 
03:20:56.365041 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5db558bd57-wl28d"] Jan 03 03:20:56 crc kubenswrapper[4746]: E0103 03:20:56.370131 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663\": container with ID starting with e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663 not found: ID does not exist" containerID="e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.370176 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663"} err="failed to get container status \"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663\": rpc error: code = NotFound desc = could not find container \"e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663\": container with ID starting with e5a3c6add8be1f19f7b291bbf810337551144133956c4f347ced4560f79dd663 not found: ID does not exist" Jan 03 03:20:56 crc kubenswrapper[4746]: I0103 03:20:56.470900 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" path="/var/lib/kubelet/pods/53e97c29-e40b-49af-bc9c-0fb6b36800fe/volumes" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.412438 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6"] Jan 03 03:20:57 crc kubenswrapper[4746]: E0103 03:20:57.413358 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" containerName="controller-manager" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.413376 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" containerName="controller-manager" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.413632 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e97c29-e40b-49af-bc9c-0fb6b36800fe" containerName="controller-manager" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.414139 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.424019 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.424210 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.424361 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.424914 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.425272 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.426062 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.430297 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.431630 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6"] Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.446171 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-client-ca\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.446243 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/b55525a2-1e36-45ed-abf5-1bec5b238eb6-kube-api-access-4tf7b\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.446272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-config\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.446293 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55525a2-1e36-45ed-abf5-1bec5b238eb6-serving-cert\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.446330 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.547543 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-client-ca\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.547623 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/b55525a2-1e36-45ed-abf5-1bec5b238eb6-kube-api-access-4tf7b\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.547650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-config\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.547685 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55525a2-1e36-45ed-abf5-1bec5b238eb6-serving-cert\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.547721 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.548892 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-client-ca\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.549114 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-proxy-ca-bundles\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.549679 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b55525a2-1e36-45ed-abf5-1bec5b238eb6-config\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " 
pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.555049 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b55525a2-1e36-45ed-abf5-1bec5b238eb6-serving-cert\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.565338 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tf7b\" (UniqueName: \"kubernetes.io/projected/b55525a2-1e36-45ed-abf5-1bec5b238eb6-kube-api-access-4tf7b\") pod \"controller-manager-7d5957c6c6-zb9d6\" (UID: \"b55525a2-1e36-45ed-abf5-1bec5b238eb6\") " pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.745302 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:57 crc kubenswrapper[4746]: I0103 03:20:57.974526 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.170015 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w4rjp"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.171096 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.182196 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w4rjp"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.343286 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" event={"ID":"b55525a2-1e36-45ed-abf5-1bec5b238eb6","Type":"ContainerStarted","Data":"f284850a35ca05ccbbb67dbdce55a561fed63c89940a9e634a32dfbf82142aaa"} Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.343339 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" event={"ID":"b55525a2-1e36-45ed-abf5-1bec5b238eb6","Type":"ContainerStarted","Data":"f62f6d3a521d338332f2c5283efbd7aa5b4aa0bfa907514ed3da6314cd49963a"} Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.343680 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.349545 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360299 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-registry-tls\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360366 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2cbzp\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-kube-api-access-2cbzp\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360400 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-registry-certificates\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360482 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b46c89e7-e207-43fa-aeb1-725c11919394-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360514 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b46c89e7-e207-43fa-aeb1-725c11919394-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360552 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360589 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-trusted-ca\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.360681 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-bound-sa-token\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.410141 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d5957c6c6-zb9d6" podStartSLOduration=3.410123054 podStartE2EDuration="3.410123054s" podCreationTimestamp="2026-01-03 03:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:20:58.390111737 +0000 UTC m=+378.240002052" watchObservedRunningTime="2026-01-03 03:20:58.410123054 +0000 UTC m=+378.260013359" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 
03:20:58.410814 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.411016 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l57js" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="registry-server" containerID="cri-o://6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0" gracePeriod=30 Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.421003 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.421255 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nssxg" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="registry-server" containerID="cri-o://57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938" gracePeriod=30 Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.446740 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.447017 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" containerID="cri-o://af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8" gracePeriod=30 Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462144 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-trusted-ca\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462233 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-bound-sa-token\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-registry-tls\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462297 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbzp\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-kube-api-access-2cbzp\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462322 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-registry-certificates\") pod 
\"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462407 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b46c89e7-e207-43fa-aeb1-725c11919394-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b46c89e7-e207-43fa-aeb1-725c11919394-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462874 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.462934 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b46c89e7-e207-43fa-aeb1-725c11919394-ca-trust-extracted\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.463094 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vhm58" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="registry-server" containerID="cri-o://0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b" gracePeriod=30 Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.464241 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-trusted-ca\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.466340 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b46c89e7-e207-43fa-aeb1-725c11919394-registry-certificates\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.474313 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.474531 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87hhg" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="registry-server" containerID="cri-o://d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72" gracePeriod=30 Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.475616 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-registry-tls\") pod 
\"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.479162 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b46c89e7-e207-43fa-aeb1-725c11919394-installation-pull-secrets\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.507632 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbzp\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-kube-api-access-2cbzp\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.519529 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b46c89e7-e207-43fa-aeb1-725c11919394-bound-sa-token\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.520467 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5z9t"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.525894 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.538767 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5z9t"] Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.544686 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-w4rjp\" (UID: \"b46c89e7-e207-43fa-aeb1-725c11919394\") " pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.668457 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvfz\" (UniqueName: \"kubernetes.io/projected/66739b82-1665-4781-b791-5b1fa1807d88-kube-api-access-wlvfz\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.668503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.668556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.769901 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.770032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlvfz\" (UniqueName: \"kubernetes.io/projected/66739b82-1665-4781-b791-5b1fa1807d88-kube-api-access-wlvfz\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.770061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.771544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.784375 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/66739b82-1665-4781-b791-5b1fa1807d88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.787789 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlvfz\" (UniqueName: \"kubernetes.io/projected/66739b82-1665-4781-b791-5b1fa1807d88-kube-api-access-wlvfz\") pod \"marketplace-operator-79b997595-t5z9t\" (UID: \"66739b82-1665-4781-b791-5b1fa1807d88\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.789610 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.876604 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.891256 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.982616 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:20:58 crc kubenswrapper[4746]: I0103 03:20:58.983168 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.015532 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.037966 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.073446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca\") pod \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.073518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkq2n\" (UniqueName: \"kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n\") pod \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.073543 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics\") pod \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\" (UID: \"c56b0f70-ca3e-431d-88f4-d7f518b67e9c\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.074976 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c56b0f70-ca3e-431d-88f4-d7f518b67e9c" (UID: "c56b0f70-ca3e-431d-88f4-d7f518b67e9c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.078858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c56b0f70-ca3e-431d-88f4-d7f518b67e9c" (UID: "c56b0f70-ca3e-431d-88f4-d7f518b67e9c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.079251 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n" (OuterVolumeSpecName: "kube-api-access-vkq2n") pod "c56b0f70-ca3e-431d-88f4-d7f518b67e9c" (UID: "c56b0f70-ca3e-431d-88f4-d7f518b67e9c"). InnerVolumeSpecName "kube-api-access-vkq2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8twv\" (UniqueName: \"kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv\") pod \"739b93d8-31f7-4ba5-861f-1e0579358067\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174298 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content\") pod \"59334901-9cf4-47a8-bdd6-bd5d1567a628\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174320 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content\") pod \"739b93d8-31f7-4ba5-861f-1e0579358067\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174363 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities\") pod \"739b93d8-31f7-4ba5-861f-1e0579358067\" (UID: \"739b93d8-31f7-4ba5-861f-1e0579358067\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174392 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content\") pod \"6cefe73c-d6d3-4428-af09-33abe2c70156\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174443 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbt6v\" (UniqueName: \"kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v\") pod \"59334901-9cf4-47a8-bdd6-bd5d1567a628\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174465 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rrh\" (UniqueName: \"kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh\") pod \"6cefe73c-d6d3-4428-af09-33abe2c70156\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174486 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content\") pod \"c7e2ce03-275f-447c-bf55-f915ece6d479\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174503 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities\") pod \"c7e2ce03-275f-447c-bf55-f915ece6d479\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174528 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities\") pod 
\"59334901-9cf4-47a8-bdd6-bd5d1567a628\" (UID: \"59334901-9cf4-47a8-bdd6-bd5d1567a628\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174544 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities\") pod \"6cefe73c-d6d3-4428-af09-33abe2c70156\" (UID: \"6cefe73c-d6d3-4428-af09-33abe2c70156\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174569 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqh7\" (UniqueName: \"kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7\") pod \"c7e2ce03-275f-447c-bf55-f915ece6d479\" (UID: \"c7e2ce03-275f-447c-bf55-f915ece6d479\") " Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174803 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174815 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkq2n\" (UniqueName: \"kubernetes.io/projected/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-kube-api-access-vkq2n\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.174824 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c56b0f70-ca3e-431d-88f4-d7f518b67e9c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.175677 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities" (OuterVolumeSpecName: "utilities") pod "c7e2ce03-275f-447c-bf55-f915ece6d479" (UID: "c7e2ce03-275f-447c-bf55-f915ece6d479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.175818 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities" (OuterVolumeSpecName: "utilities") pod "59334901-9cf4-47a8-bdd6-bd5d1567a628" (UID: "59334901-9cf4-47a8-bdd6-bd5d1567a628"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.176701 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities" (OuterVolumeSpecName: "utilities") pod "6cefe73c-d6d3-4428-af09-33abe2c70156" (UID: "6cefe73c-d6d3-4428-af09-33abe2c70156"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.178157 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7" (OuterVolumeSpecName: "kube-api-access-wcqh7") pod "c7e2ce03-275f-447c-bf55-f915ece6d479" (UID: "c7e2ce03-275f-447c-bf55-f915ece6d479"). InnerVolumeSpecName "kube-api-access-wcqh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.178999 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v" (OuterVolumeSpecName: "kube-api-access-cbt6v") pod "59334901-9cf4-47a8-bdd6-bd5d1567a628" (UID: "59334901-9cf4-47a8-bdd6-bd5d1567a628"). InnerVolumeSpecName "kube-api-access-cbt6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.179183 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh" (OuterVolumeSpecName: "kube-api-access-l8rrh") pod "6cefe73c-d6d3-4428-af09-33abe2c70156" (UID: "6cefe73c-d6d3-4428-af09-33abe2c70156"). InnerVolumeSpecName "kube-api-access-l8rrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.179309 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv" (OuterVolumeSpecName: "kube-api-access-d8twv") pod "739b93d8-31f7-4ba5-861f-1e0579358067" (UID: "739b93d8-31f7-4ba5-861f-1e0579358067"). InnerVolumeSpecName "kube-api-access-d8twv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.181676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities" (OuterVolumeSpecName: "utilities") pod "739b93d8-31f7-4ba5-861f-1e0579358067" (UID: "739b93d8-31f7-4ba5-861f-1e0579358067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.199074 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59334901-9cf4-47a8-bdd6-bd5d1567a628" (UID: "59334901-9cf4-47a8-bdd6-bd5d1567a628"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.239197 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7e2ce03-275f-447c-bf55-f915ece6d479" (UID: "c7e2ce03-275f-447c-bf55-f915ece6d479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.241674 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "739b93d8-31f7-4ba5-861f-1e0579358067" (UID: "739b93d8-31f7-4ba5-861f-1e0579358067"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276813 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276841 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbt6v\" (UniqueName: \"kubernetes.io/projected/59334901-9cf4-47a8-bdd6-bd5d1567a628-kube-api-access-cbt6v\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276853 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rrh\" (UniqueName: \"kubernetes.io/projected/6cefe73c-d6d3-4428-af09-33abe2c70156-kube-api-access-l8rrh\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276864 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276877 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7e2ce03-275f-447c-bf55-f915ece6d479-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276889 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276900 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276914 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqh7\" (UniqueName: \"kubernetes.io/projected/c7e2ce03-275f-447c-bf55-f915ece6d479-kube-api-access-wcqh7\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276924 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8twv\" (UniqueName: \"kubernetes.io/projected/739b93d8-31f7-4ba5-861f-1e0579358067-kube-api-access-d8twv\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276932 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59334901-9cf4-47a8-bdd6-bd5d1567a628-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.276940 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/739b93d8-31f7-4ba5-861f-1e0579358067-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.306354 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cefe73c-d6d3-4428-af09-33abe2c70156" (UID: "6cefe73c-d6d3-4428-af09-33abe2c70156"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.310265 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-w4rjp"] Jan 03 03:20:59 crc kubenswrapper[4746]: W0103 03:20:59.314032 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46c89e7_e207_43fa_aeb1_725c11919394.slice/crio-0f8b0ff426c41908e917aed82a6d1dcd97100b106e2080d2f03f81e806609c37 WatchSource:0}: Error finding container 0f8b0ff426c41908e917aed82a6d1dcd97100b106e2080d2f03f81e806609c37: Status 404 returned error can't find the container with id 0f8b0ff426c41908e917aed82a6d1dcd97100b106e2080d2f03f81e806609c37 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.355268 4746 generic.go:334] "Generic (PLEG): container finished" podID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerID="af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8" exitCode=0 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.355306 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.355334 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerDied","Data":"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.355372 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mpsxq" event={"ID":"c56b0f70-ca3e-431d-88f4-d7f518b67e9c","Type":"ContainerDied","Data":"19e6e50ec1a31d9eab1943d6beef1acf25b1ed45e49ed6407004ba910d928911"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.355393 4746 scope.go:117] "RemoveContainer" containerID="af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.356643 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5z9t"] Jan 03 03:20:59 crc kubenswrapper[4746]: W0103 03:20:59.359017 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66739b82_1665_4781_b791_5b1fa1807d88.slice/crio-5665f5671c75ee91cdafd4c16220fa6e862579612e371901f1ab52041ddd0363 WatchSource:0}: Error finding container 5665f5671c75ee91cdafd4c16220fa6e862579612e371901f1ab52041ddd0363: Status 404 returned error can't find the container with id 5665f5671c75ee91cdafd4c16220fa6e862579612e371901f1ab52041ddd0363 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.364633 4746 generic.go:334] "Generic (PLEG): container finished" podID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerID="57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938" exitCode=0 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.364709 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerDied","Data":"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.364738 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nssxg" event={"ID":"c7e2ce03-275f-447c-bf55-f915ece6d479","Type":"ContainerDied","Data":"83306bf9e1ffb9ddabd7a8fb6654b4ba61816fd1ac619efd9212bd39253ef971"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.364821 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nssxg" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.369222 4746 generic.go:334] "Generic (PLEG): container finished" podID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerID="d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72" exitCode=0 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.369295 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerDied","Data":"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.369320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87hhg" event={"ID":"6cefe73c-d6d3-4428-af09-33abe2c70156","Type":"ContainerDied","Data":"478f3795183b4e0a2711dd59abe4ad5af0dafb8cefc9f7d7a6d54b7c27759d2e"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.369327 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87hhg" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.372233 4746 generic.go:334] "Generic (PLEG): container finished" podID="739b93d8-31f7-4ba5-861f-1e0579358067" containerID="6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0" exitCode=0 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.372294 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerDied","Data":"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.372321 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l57js" event={"ID":"739b93d8-31f7-4ba5-861f-1e0579358067","Type":"ContainerDied","Data":"8087ed371b3754b5cfcd5bad125e0e11baef54f52546474e9d148356ce335800"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.372360 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l57js" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.375311 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" event={"ID":"b46c89e7-e207-43fa-aeb1-725c11919394","Type":"ContainerStarted","Data":"0f8b0ff426c41908e917aed82a6d1dcd97100b106e2080d2f03f81e806609c37"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.377708 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cefe73c-d6d3-4428-af09-33abe2c70156-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.378978 4746 generic.go:334] "Generic (PLEG): container finished" podID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerID="0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b" exitCode=0 Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.379097 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vhm58" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.379289 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerDied","Data":"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.379360 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vhm58" event={"ID":"59334901-9cf4-47a8-bdd6-bd5d1567a628","Type":"ContainerDied","Data":"8e0f5e4dff989411a04feed2ab61271770b96cb0e90fdb8e01597358a3399660"} Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.397690 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.403868 4746 scope.go:117] "RemoveContainer" containerID="67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.405253 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mpsxq"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.408380 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.413705 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nssxg"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.424153 4746 scope.go:117] "RemoveContainer" containerID="af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.426246 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8\": container with ID starting with af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8 not found: ID does not exist" containerID="af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.426276 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8"} 
err="failed to get container status \"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8\": rpc error: code = NotFound desc = could not find container \"af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8\": container with ID starting with af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.426293 4746 scope.go:117] "RemoveContainer" containerID="67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.426606 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17\": container with ID starting with 67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17 not found: ID does not exist" containerID="67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.426622 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17"} err="failed to get container status \"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17\": rpc error: code = NotFound desc = could not find container \"67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17\": container with ID starting with 67bea8d4015dce54c84e7075dd68ac3bc73fb10463701bfc478c916b8d2ffa17 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.426634 4746 scope.go:117] "RemoveContainer" containerID="57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.429262 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.434425 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vhm58"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.444412 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.450861 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87hhg"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.454241 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.457275 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l57js"] Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.468041 4746 scope.go:117] "RemoveContainer" containerID="fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.492534 4746 scope.go:117] "RemoveContainer" containerID="f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.510043 4746 scope.go:117] "RemoveContainer" containerID="57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.510739 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938\": container with ID starting with 57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938 not found: ID does not exist" containerID="57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.510788 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938"} err="failed to get container status \"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938\": rpc error: code = NotFound desc = could not find container \"57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938\": container with ID starting with 57067cebb50758cbe39da7f4f1494f1fa5979c0fbf3d87e47ff94bdeddf5d938 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.510817 4746 scope.go:117] "RemoveContainer" containerID="fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.511501 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c\": container with ID starting with fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c not found: ID does not exist" containerID="fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.511541 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c"} err="failed to get container status \"fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c\": rpc error: code = NotFound desc = could not find container \"fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c\": container with ID starting with fb57b03baaff24634f0fc39d694a61ab55a9245edb70bec92de106f72165f70c not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.511573 4746 scope.go:117] "RemoveContainer" containerID="f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.512341 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd\": container with ID starting with f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd not found: ID does not exist" containerID="f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.512408 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd"} err="failed to get container status \"f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd\": rpc error: code = NotFound desc = could not find container \"f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd\": container with ID starting with f729f0c23d6531894cab285df1d009d0d5b7945d820aefe0d3268b4a083304dd not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.512444 4746 scope.go:117] "RemoveContainer" containerID="d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72" Jan 03 03:20:59 crc 
kubenswrapper[4746]: I0103 03:20:59.526580 4746 scope.go:117] "RemoveContainer" containerID="49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.545009 4746 scope.go:117] "RemoveContainer" containerID="5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.560608 4746 scope.go:117] "RemoveContainer" containerID="d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.561181 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72\": container with ID starting with d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72 not found: ID does not exist" containerID="d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561212 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72"} err="failed to get container status \"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72\": rpc error: code = NotFound desc = could not find container \"d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72\": container with ID starting with d8dfdf74e8b4ede5bd652315b8677836229ee74e063835aa388916cf30855b72 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561234 4746 scope.go:117] "RemoveContainer" containerID="49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.561486 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9\": container with ID starting with 49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9 not found: ID does not exist" containerID="49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561508 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9"} err="failed to get container status \"49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9\": rpc error: code = NotFound desc = could not find container \"49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9\": container with ID starting with 49722885e3b051508c74228fc433a2b3fbb69d46d79217503150af361181f6c9 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561543 4746 scope.go:117] "RemoveContainer" containerID="5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.561769 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993\": container with ID starting with 5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993 not found: ID does not exist" containerID="5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561792 4746 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993"} err="failed to get container status \"5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993\": rpc error: code = NotFound desc = could not find container \"5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993\": container with ID starting with 5e52864582898856b6051ba79065af3d7b3dc8e434f3c7dff78be191b9e11993 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.561808 4746 scope.go:117] "RemoveContainer" containerID="6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.581034 4746 scope.go:117] "RemoveContainer" containerID="3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.596196 4746 scope.go:117] "RemoveContainer" containerID="10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.658328 4746 scope.go:117] "RemoveContainer" containerID="6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.658855 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0\": container with ID starting with 6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0 not found: ID does not exist" containerID="6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.658902 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0"} err="failed to get container status \"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0\": rpc error: code = NotFound desc = could not find container \"6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0\": container with ID starting with 6892198e2530c2dc05b9964b6e241f380f6bd97b1e2486ea2e269a0beffc7ab0 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.658933 4746 scope.go:117] "RemoveContainer" containerID="3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.659263 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf\": container with ID starting with 3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf not found: ID does not exist" containerID="3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.659300 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf"} err="failed to get container status \"3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf\": rpc error: code = NotFound desc = could not find container \"3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf\": container with ID starting with 3464520285c44ff5cd1eb3bad2a107aaee464499854407215a8e949e08c59ccf not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 
03:20:59.659322 4746 scope.go:117] "RemoveContainer" containerID="10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.659622 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2\": container with ID starting with 10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2 not found: ID does not exist" containerID="10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.659674 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2"} err="failed to get container status \"10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2\": rpc error: code = NotFound desc = could not find container \"10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2\": container with ID starting with 10cb2c3e010f8aee6c7877918e5554f9367940b92d5ed4e164e268c1c5439bd2 not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.659700 4746 scope.go:117] "RemoveContainer" containerID="0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.687490 4746 scope.go:117] "RemoveContainer" containerID="9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.701099 4746 scope.go:117] "RemoveContainer" containerID="32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.721043 4746 scope.go:117] "RemoveContainer" containerID="0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.721551 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b\": container with ID starting with 0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b not found: ID does not exist" containerID="0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.721643 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b"} err="failed to get container status \"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b\": rpc error: code = NotFound desc = could not find container \"0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b\": container with ID starting with 0fd2962db5653331465a037d29a56bd6598c4be6f34f7542179a868e9340741b not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.722848 4746 scope.go:117] "RemoveContainer" containerID="9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.723924 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a\": container with ID starting with 9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a not found: ID does not exist" 
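
The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are the kubelet asking CRI-O about containers that were already removed; the runtime answers with gRPC NotFound, and the kubelet records the error and moves on. A minimal sketch, using the grpc-go status and codes packages, of treating NotFound from such a call as "already gone" rather than a hard failure; containerStatus here is a hypothetical stand-in for the real CRI client call, hard-wired to return NotFound.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus stands in for a CRI ContainerStatus RPC; here it always
// reports NotFound, as CRI-O does above for already-removed containers.
func containerStatus(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "af1b069f3a220f21e9d095faf60e564ec23997819b6393c2e45ae5f44fa32de8"
	if err := containerStatus(id); status.Code(err) == codes.NotFound {
		// Nothing left to delete: the container is already gone.
		fmt.Println("container already removed:", id)
	} else if err != nil {
		fmt.Println("unexpected runtime error:", err)
	}
}
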
containerID="9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.723955 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a"} err="failed to get container status \"9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a\": rpc error: code = NotFound desc = could not find container \"9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a\": container with ID starting with 9248cf17112f2cf3bf99e40f541bcc957149423376aa2f30d4c67d5599192e7a not found: ID does not exist" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.723976 4746 scope.go:117] "RemoveContainer" containerID="32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada" Jan 03 03:20:59 crc kubenswrapper[4746]: E0103 03:20:59.724251 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada\": container with ID starting with 32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada not found: ID does not exist" containerID="32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada" Jan 03 03:20:59 crc kubenswrapper[4746]: I0103 03:20:59.724289 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada"} err="failed to get container status \"32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada\": rpc error: code = NotFound desc = could not find container \"32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada\": container with ID starting with 32433c9bb7f5948d062d5bee23a6d7b59e6a3ba4f64de06b8b8fd70e85585ada not found: ID does not exist" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.384571 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" event={"ID":"66739b82-1665-4781-b791-5b1fa1807d88","Type":"ContainerStarted","Data":"c309bd042e8af567fb3133a62af22481996798ed05061b765176d62a461aa2b9"} Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.384911 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" event={"ID":"66739b82-1665-4781-b791-5b1fa1807d88","Type":"ContainerStarted","Data":"5665f5671c75ee91cdafd4c16220fa6e862579612e371901f1ab52041ddd0363"} Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.386987 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.394175 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" event={"ID":"b46c89e7-e207-43fa-aeb1-725c11919394","Type":"ContainerStarted","Data":"afafdaaff73de4d2ed49d314ef42a4a5f096f8ac94e3e37f0e3c88c0defe258a"} Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.394629 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.402541 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" podStartSLOduration=2.4025149089999998 
podStartE2EDuration="2.402514909s" podCreationTimestamp="2026-01-03 03:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:21:00.400025967 +0000 UTC m=+380.249916282" watchObservedRunningTime="2026-01-03 03:21:00.402514909 +0000 UTC m=+380.252405214" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.413391 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t5z9t" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.423717 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" podStartSLOduration=2.423698725 podStartE2EDuration="2.423698725s" podCreationTimestamp="2026-01-03 03:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:21:00.417687126 +0000 UTC m=+380.267577431" watchObservedRunningTime="2026-01-03 03:21:00.423698725 +0000 UTC m=+380.273589030" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.480254 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" path="/var/lib/kubelet/pods/59334901-9cf4-47a8-bdd6-bd5d1567a628/volumes" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.481026 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" path="/var/lib/kubelet/pods/6cefe73c-d6d3-4428-af09-33abe2c70156/volumes" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.481639 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" path="/var/lib/kubelet/pods/739b93d8-31f7-4ba5-861f-1e0579358067/volumes" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.483714 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" path="/var/lib/kubelet/pods/c56b0f70-ca3e-431d-88f4-d7f518b67e9c/volumes" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.484330 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" path="/var/lib/kubelet/pods/c7e2ce03-275f-447c-bf55-f915ece6d479/volumes" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634385 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w8694"] Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634573 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634583 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634597 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634603 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634613 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" 
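
In the "Observed pod startup duration" entries just above, podStartE2EDuration is simply the gap between podCreationTimestamp and watchObservedRunningTime: for marketplace-operator-79b997595-t5z9t, 03:21:00.402514909 minus 03:20:58 is 2.402514909s, exactly the value logged. A small Go sketch that reproduces the subtraction from the timestamp strings as they appear in the log; the layout constant is an assumption read off that visible format.

package main

import (
	"fmt"
	"log"
	"time"
)

// layout matches timestamps as printed above, e.g. "2026-01-03 03:20:58 +0000 UTC".
// time.Parse also accepts the fractional seconds present in watchObservedRunningTime.
const layout = "2006-01-02 15:04:05 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2026-01-03 03:20:58 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2026-01-03 03:21:00.402514909 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(running.Sub(created)) // prints 2.402514909s, matching podStartE2EDuration
}
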
containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634619 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634628 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634634 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634642 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634648 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634675 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634682 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634689 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634694 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="extract-content" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634706 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634712 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634720 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634726 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634733 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634740 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634750 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634756 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634766 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634771 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634779 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634786 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="extract-utilities" Jan 03 03:21:00 crc kubenswrapper[4746]: E0103 03:21:00.634793 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634799 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634876 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634886 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e2ce03-275f-447c-bf55-f915ece6d479" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634897 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56b0f70-ca3e-431d-88f4-d7f518b67e9c" containerName="marketplace-operator" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634905 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="739b93d8-31f7-4ba5-861f-1e0579358067" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634915 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="59334901-9cf4-47a8-bdd6-bd5d1567a628" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.634925 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cefe73c-d6d3-4428-af09-33abe2c70156" containerName="registry-server" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.635608 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.638400 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.647693 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w8694"] Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.799102 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-catalog-content\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.799186 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-utilities\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.799226 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2zmk\" (UniqueName: \"kubernetes.io/projected/88e33a79-0d63-4964-974b-374fa53c1113-kube-api-access-l2zmk\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.901562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-catalog-content\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.901634 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-utilities\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.901674 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2zmk\" (UniqueName: \"kubernetes.io/projected/88e33a79-0d63-4964-974b-374fa53c1113-kube-api-access-l2zmk\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.902039 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-catalog-content\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.902142 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e33a79-0d63-4964-974b-374fa53c1113-utilities\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " 
pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.921309 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2zmk\" (UniqueName: \"kubernetes.io/projected/88e33a79-0d63-4964-974b-374fa53c1113-kube-api-access-l2zmk\") pod \"redhat-operators-w8694\" (UID: \"88e33a79-0d63-4964-974b-374fa53c1113\") " pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:00 crc kubenswrapper[4746]: I0103 03:21:00.951418 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.035615 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qtcgf"] Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.037964 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.040482 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.057698 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtcgf"] Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.206523 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-utilities\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.206573 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-catalog-content\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.206605 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72m8t\" (UniqueName: \"kubernetes.io/projected/e8ba7568-2180-4086-85d4-c66dff5b3690-kube-api-access-72m8t\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.307795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-catalog-content\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.307844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72m8t\" (UniqueName: \"kubernetes.io/projected/e8ba7568-2180-4086-85d4-c66dff5b3690-kube-api-access-72m8t\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.307914 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-utilities\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.308345 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-catalog-content\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.308377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8ba7568-2180-4086-85d4-c66dff5b3690-utilities\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.327799 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72m8t\" (UniqueName: \"kubernetes.io/projected/e8ba7568-2180-4086-85d4-c66dff5b3690-kube-api-access-72m8t\") pod \"redhat-marketplace-qtcgf\" (UID: \"e8ba7568-2180-4086-85d4-c66dff5b3690\") " pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.364890 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.373747 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.373809 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.373853 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.374422 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.374483 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d" gracePeriod=600 Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.375646 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w8694"] Jan 03 
03:21:01 crc kubenswrapper[4746]: W0103 03:21:01.380938 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88e33a79_0d63_4964_974b_374fa53c1113.slice/crio-f5c8db907e33d93f01e481c11ea1e0eac3fead7e56b9fac4860ebf3bd8ae659e WatchSource:0}: Error finding container f5c8db907e33d93f01e481c11ea1e0eac3fead7e56b9fac4860ebf3bd8ae659e: Status 404 returned error can't find the container with id f5c8db907e33d93f01e481c11ea1e0eac3fead7e56b9fac4860ebf3bd8ae659e Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.407385 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8694" event={"ID":"88e33a79-0d63-4964-974b-374fa53c1113","Type":"ContainerStarted","Data":"f5c8db907e33d93f01e481c11ea1e0eac3fead7e56b9fac4860ebf3bd8ae659e"} Jan 03 03:21:01 crc kubenswrapper[4746]: I0103 03:21:01.772128 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtcgf"] Jan 03 03:21:01 crc kubenswrapper[4746]: W0103 03:21:01.783847 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8ba7568_2180_4086_85d4_c66dff5b3690.slice/crio-85757e72a29c246939e41eef5c1236df028f71166ac54ff88f606013812dbaad WatchSource:0}: Error finding container 85757e72a29c246939e41eef5c1236df028f71166ac54ff88f606013812dbaad: Status 404 returned error can't find the container with id 85757e72a29c246939e41eef5c1236df028f71166ac54ff88f606013812dbaad Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.413948 4746 generic.go:334] "Generic (PLEG): container finished" podID="88e33a79-0d63-4964-974b-374fa53c1113" containerID="52f036ad38e438681a6347311164483473e8763ff473b362e2b462ba20575095" exitCode=0 Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.414026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8694" event={"ID":"88e33a79-0d63-4964-974b-374fa53c1113","Type":"ContainerDied","Data":"52f036ad38e438681a6347311164483473e8763ff473b362e2b462ba20575095"} Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.417255 4746 generic.go:334] "Generic (PLEG): container finished" podID="e8ba7568-2180-4086-85d4-c66dff5b3690" containerID="f963f074614cdae02527dba98edae4325dafcd7a6a50277d58c2d6cdadee3702" exitCode=0 Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.417419 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtcgf" event={"ID":"e8ba7568-2180-4086-85d4-c66dff5b3690","Type":"ContainerDied","Data":"f963f074614cdae02527dba98edae4325dafcd7a6a50277d58c2d6cdadee3702"} Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.417451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtcgf" event={"ID":"e8ba7568-2180-4086-85d4-c66dff5b3690","Type":"ContainerStarted","Data":"85757e72a29c246939e41eef5c1236df028f71166ac54ff88f606013812dbaad"} Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.420849 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d" exitCode=0 Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.421411 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" 
event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d"} Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.421429 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9"} Jan 03 03:21:02 crc kubenswrapper[4746]: I0103 03:21:02.421444 4746 scope.go:117] "RemoveContainer" containerID="87b13d723f465a2b9908be088d1df0255ae7cdf6ef557c0207ebcf95f9a54e17" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.232520 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rbl4"] Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.234148 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.237168 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.247086 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rbl4"] Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.335922 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5gb\" (UniqueName: \"kubernetes.io/projected/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-kube-api-access-bv5gb\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.336018 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-catalog-content\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.336073 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-utilities\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.432919 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8694" event={"ID":"88e33a79-0d63-4964-974b-374fa53c1113","Type":"ContainerStarted","Data":"d84dedd4c6257a56adcdbcdae7bdc4bb856376b07c2198505aacf987647711be"} Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.437854 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-catalog-content\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.437924 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-utilities\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.437970 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5gb\" (UniqueName: \"kubernetes.io/projected/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-kube-api-access-bv5gb\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.438620 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mzqgw"] Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.438734 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-catalog-content\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.439384 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-utilities\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.439605 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.441634 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.442257 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtcgf" event={"ID":"e8ba7568-2180-4086-85d4-c66dff5b3690","Type":"ContainerStarted","Data":"d6b9241ee36980124801d32a025ab195aed9842958b99b3134bb5251fdb5f34d"} Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.456562 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzqgw"] Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.465406 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5gb\" (UniqueName: \"kubernetes.io/projected/755ed109-a3f1-48c4-8bb2-0af0f2a543cf-kube-api-access-bv5gb\") pod \"certified-operators-4rbl4\" (UID: \"755ed109-a3f1-48c4-8bb2-0af0f2a543cf\") " pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.539050 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-utilities\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.539127 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-catalog-content\") pod 
\"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.539451 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmwzl\" (UniqueName: \"kubernetes.io/projected/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-kube-api-access-wmwzl\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.547911 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.641316 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-catalog-content\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.644399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmwzl\" (UniqueName: \"kubernetes.io/projected/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-kube-api-access-wmwzl\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.644481 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-utilities\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.644708 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-catalog-content\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.646115 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-utilities\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.662823 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmwzl\" (UniqueName: \"kubernetes.io/projected/9a35bb44-d6aa-4a48-81b8-feacb81c8dbc-kube-api-access-wmwzl\") pod \"community-operators-mzqgw\" (UID: \"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc\") " pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.888951 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:03 crc kubenswrapper[4746]: I0103 03:21:03.931952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rbl4"] Jan 03 03:21:03 crc kubenswrapper[4746]: W0103 03:21:03.938824 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod755ed109_a3f1_48c4_8bb2_0af0f2a543cf.slice/crio-7222ca7997f7dd57d561f40b2e3c1b051d76f71dc223b572276a449886ca3d46 WatchSource:0}: Error finding container 7222ca7997f7dd57d561f40b2e3c1b051d76f71dc223b572276a449886ca3d46: Status 404 returned error can't find the container with id 7222ca7997f7dd57d561f40b2e3c1b051d76f71dc223b572276a449886ca3d46 Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.284000 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mzqgw"] Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.447720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerStarted","Data":"cf1250c763c469ba958d639780ace77bd850008e92d9799bb98311792ddca5ba"} Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.448028 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerStarted","Data":"159a25f8b0d6c2d52fd2ca9b0b25d486cc44509079c0716ef818d4263458301c"} Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.455078 4746 generic.go:334] "Generic (PLEG): container finished" podID="88e33a79-0d63-4964-974b-374fa53c1113" containerID="d84dedd4c6257a56adcdbcdae7bdc4bb856376b07c2198505aacf987647711be" exitCode=0 Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.455143 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8694" event={"ID":"88e33a79-0d63-4964-974b-374fa53c1113","Type":"ContainerDied","Data":"d84dedd4c6257a56adcdbcdae7bdc4bb856376b07c2198505aacf987647711be"} Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.457022 4746 generic.go:334] "Generic (PLEG): container finished" podID="e8ba7568-2180-4086-85d4-c66dff5b3690" containerID="d6b9241ee36980124801d32a025ab195aed9842958b99b3134bb5251fdb5f34d" exitCode=0 Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.457075 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtcgf" event={"ID":"e8ba7568-2180-4086-85d4-c66dff5b3690","Type":"ContainerDied","Data":"d6b9241ee36980124801d32a025ab195aed9842958b99b3134bb5251fdb5f34d"} Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.460474 4746 generic.go:334] "Generic (PLEG): container finished" podID="755ed109-a3f1-48c4-8bb2-0af0f2a543cf" containerID="5fc5dc63ad375aa4e8a4d3bf658abe00794c1a22040f6ff87d3ff3b843ec9c4b" exitCode=0 Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.460500 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rbl4" event={"ID":"755ed109-a3f1-48c4-8bb2-0af0f2a543cf","Type":"ContainerDied","Data":"5fc5dc63ad375aa4e8a4d3bf658abe00794c1a22040f6ff87d3ff3b843ec9c4b"} Jan 03 03:21:04 crc kubenswrapper[4746]: I0103 03:21:04.460518 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rbl4" 
event={"ID":"755ed109-a3f1-48c4-8bb2-0af0f2a543cf","Type":"ContainerStarted","Data":"7222ca7997f7dd57d561f40b2e3c1b051d76f71dc223b572276a449886ca3d46"} Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.467097 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtcgf" event={"ID":"e8ba7568-2180-4086-85d4-c66dff5b3690","Type":"ContainerStarted","Data":"a7d1c3dce1e4b770c85d9adede2b9896580c219a9a088e4c35af9c4c25545fda"} Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.469276 4746 generic.go:334] "Generic (PLEG): container finished" podID="755ed109-a3f1-48c4-8bb2-0af0f2a543cf" containerID="84f7aa0b98f07f22819e6d337cff778f8cdb693e62de8d60c2dd486a18b9123c" exitCode=0 Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.469334 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rbl4" event={"ID":"755ed109-a3f1-48c4-8bb2-0af0f2a543cf","Type":"ContainerDied","Data":"84f7aa0b98f07f22819e6d337cff778f8cdb693e62de8d60c2dd486a18b9123c"} Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.483187 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a35bb44-d6aa-4a48-81b8-feacb81c8dbc" containerID="cf1250c763c469ba958d639780ace77bd850008e92d9799bb98311792ddca5ba" exitCode=0 Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.483289 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerDied","Data":"cf1250c763c469ba958d639780ace77bd850008e92d9799bb98311792ddca5ba"} Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.487600 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8694" event={"ID":"88e33a79-0d63-4964-974b-374fa53c1113","Type":"ContainerStarted","Data":"b42cc84f6cd346271ef6f2dca172931f64eb31e99f13ecc16b0948cb2f3eecc1"} Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.499904 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qtcgf" podStartSLOduration=1.879646017 podStartE2EDuration="4.49989273s" podCreationTimestamp="2026-01-03 03:21:01 +0000 UTC" firstStartedPulling="2026-01-03 03:21:02.419061745 +0000 UTC m=+382.268952050" lastFinishedPulling="2026-01-03 03:21:05.039308458 +0000 UTC m=+384.889198763" observedRunningTime="2026-01-03 03:21:05.498149636 +0000 UTC m=+385.348039951" watchObservedRunningTime="2026-01-03 03:21:05.49989273 +0000 UTC m=+385.349783035" Jan 03 03:21:05 crc kubenswrapper[4746]: I0103 03:21:05.515930 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w8694" podStartSLOduration=2.745361762 podStartE2EDuration="5.515915178s" podCreationTimestamp="2026-01-03 03:21:00 +0000 UTC" firstStartedPulling="2026-01-03 03:21:02.416227035 +0000 UTC m=+382.266117360" lastFinishedPulling="2026-01-03 03:21:05.186780451 +0000 UTC m=+385.036670776" observedRunningTime="2026-01-03 03:21:05.513790305 +0000 UTC m=+385.363680610" watchObservedRunningTime="2026-01-03 03:21:05.515915178 +0000 UTC m=+385.365805483" Jan 03 03:21:06 crc kubenswrapper[4746]: I0103 03:21:06.495852 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rbl4" event={"ID":"755ed109-a3f1-48c4-8bb2-0af0f2a543cf","Type":"ContainerStarted","Data":"f5d9482578cd06e9d2023ddb5e05da7f4fac8bae05ec4e5a701ee860a1229290"} Jan 03 03:21:06 crc 
kubenswrapper[4746]: I0103 03:21:06.497545 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerStarted","Data":"9867cf4006fd6c58d514fcd360bdea19d1bb347513c5611dc38aa4f3f67d9dc9"} Jan 03 03:21:06 crc kubenswrapper[4746]: I0103 03:21:06.518509 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rbl4" podStartSLOduration=1.98629104 podStartE2EDuration="3.518493954s" podCreationTimestamp="2026-01-03 03:21:03 +0000 UTC" firstStartedPulling="2026-01-03 03:21:04.461530944 +0000 UTC m=+384.311421249" lastFinishedPulling="2026-01-03 03:21:05.993733858 +0000 UTC m=+385.843624163" observedRunningTime="2026-01-03 03:21:06.517819208 +0000 UTC m=+386.367709523" watchObservedRunningTime="2026-01-03 03:21:06.518493954 +0000 UTC m=+386.368384259" Jan 03 03:21:07 crc kubenswrapper[4746]: I0103 03:21:07.504480 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a35bb44-d6aa-4a48-81b8-feacb81c8dbc" containerID="9867cf4006fd6c58d514fcd360bdea19d1bb347513c5611dc38aa4f3f67d9dc9" exitCode=0 Jan 03 03:21:07 crc kubenswrapper[4746]: I0103 03:21:07.504712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerDied","Data":"9867cf4006fd6c58d514fcd360bdea19d1bb347513c5611dc38aa4f3f67d9dc9"} Jan 03 03:21:09 crc kubenswrapper[4746]: I0103 03:21:09.560814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mzqgw" event={"ID":"9a35bb44-d6aa-4a48-81b8-feacb81c8dbc","Type":"ContainerStarted","Data":"b0f891f8cf7859f6f6c35dc6763f17ff1dec4f0972361088cf96a2da8581252f"} Jan 03 03:21:09 crc kubenswrapper[4746]: I0103 03:21:09.586544 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mzqgw" podStartSLOduration=4.137129373 podStartE2EDuration="6.586522061s" podCreationTimestamp="2026-01-03 03:21:03 +0000 UTC" firstStartedPulling="2026-01-03 03:21:05.484893007 +0000 UTC m=+385.334783312" lastFinishedPulling="2026-01-03 03:21:07.934285695 +0000 UTC m=+387.784176000" observedRunningTime="2026-01-03 03:21:09.580481651 +0000 UTC m=+389.430371966" watchObservedRunningTime="2026-01-03 03:21:09.586522061 +0000 UTC m=+389.436412366" Jan 03 03:21:10 crc kubenswrapper[4746]: I0103 03:21:10.951864 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:10 crc kubenswrapper[4746]: I0103 03:21:10.952407 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:10 crc kubenswrapper[4746]: I0103 03:21:10.995306 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:11 crc kubenswrapper[4746]: I0103 03:21:11.365129 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:11 crc kubenswrapper[4746]: I0103 03:21:11.365193 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:11 crc kubenswrapper[4746]: I0103 03:21:11.399796 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:11 crc kubenswrapper[4746]: I0103 03:21:11.606988 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qtcgf" Jan 03 03:21:11 crc kubenswrapper[4746]: I0103 03:21:11.619990 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w8694" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.549084 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.549446 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.639586 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.674047 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rbl4" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.890224 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.890307 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:13 crc kubenswrapper[4746]: I0103 03:21:13.935144 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:14 crc kubenswrapper[4746]: I0103 03:21:14.642935 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mzqgw" Jan 03 03:21:18 crc kubenswrapper[4746]: I0103 03:21:18.794540 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-w4rjp" Jan 03 03:21:18 crc kubenswrapper[4746]: I0103 03:21:18.843219 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:21:43 crc kubenswrapper[4746]: I0103 03:21:43.890176 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerName="registry" containerID="cri-o://6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a" gracePeriod=30 Jan 03 03:21:43 crc kubenswrapper[4746]: I0103 03:21:43.976855 4746 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-ndqm2 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused" start-of-body= Jan 03 03:21:43 crc kubenswrapper[4746]: I0103 03:21:43.977027 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.21:5000/healthz\": dial tcp 10.217.0.21:5000: connect: connection refused" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.318005 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488502 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488573 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488613 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488727 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488765 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488791 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488806 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kbcq\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.488829 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates\") pod \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\" (UID: \"d3da68b1-7a82-4adc-81ae-d9edc00d3c32\") " Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.489781 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.494444 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.495262 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.499165 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.502379 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.505497 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq" (OuterVolumeSpecName: "kube-api-access-9kbcq") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "kube-api-access-9kbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.505614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.505944 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d3da68b1-7a82-4adc-81ae-d9edc00d3c32" (UID: "d3da68b1-7a82-4adc-81ae-d9edc00d3c32"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.590587 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591107 4746 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591179 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kbcq\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-kube-api-access-9kbcq\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591247 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591313 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591373 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.591438 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3da68b1-7a82-4adc-81ae-d9edc00d3c32-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.811172 4746 generic.go:334] "Generic (PLEG): container finished" podID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerID="6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a" exitCode=0 Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.811242 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" event={"ID":"d3da68b1-7a82-4adc-81ae-d9edc00d3c32","Type":"ContainerDied","Data":"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a"} Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.811285 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.811310 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ndqm2" event={"ID":"d3da68b1-7a82-4adc-81ae-d9edc00d3c32","Type":"ContainerDied","Data":"289c38a178189ee1bdeaa9655e51cd66eb43947931e659074965dd391273cb6f"} Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.811335 4746 scope.go:117] "RemoveContainer" containerID="6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.849489 4746 scope.go:117] "RemoveContainer" containerID="6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a" Jan 03 03:21:44 crc kubenswrapper[4746]: E0103 03:21:44.850100 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a\": container with ID starting with 6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a not found: ID does not exist" containerID="6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.850146 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a"} err="failed to get container status \"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a\": rpc error: code = NotFound desc = could not find container \"6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a\": container with ID starting with 6a6b0d795a293d8f8944a3831096cdbc0979462841e8a35fb68d710bb8fd533a not found: ID does not exist" Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.881720 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:21:44 crc kubenswrapper[4746]: I0103 03:21:44.887707 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ndqm2"] Jan 03 03:21:46 crc kubenswrapper[4746]: I0103 03:21:46.476693 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" path="/var/lib/kubelet/pods/d3da68b1-7a82-4adc-81ae-d9edc00d3c32/volumes" Jan 03 03:23:01 crc kubenswrapper[4746]: I0103 03:23:01.373563 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:23:01 crc kubenswrapper[4746]: I0103 03:23:01.374732 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:23:31 crc kubenswrapper[4746]: I0103 03:23:31.373889 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
03 03:23:31 crc kubenswrapper[4746]: I0103 03:23:31.374466 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:24:01 crc kubenswrapper[4746]: I0103 03:24:01.373184 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:24:01 crc kubenswrapper[4746]: I0103 03:24:01.374428 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:24:01 crc kubenswrapper[4746]: I0103 03:24:01.374532 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:24:01 crc kubenswrapper[4746]: I0103 03:24:01.375049 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:24:01 crc kubenswrapper[4746]: I0103 03:24:01.375163 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9" gracePeriod=600 Jan 03 03:24:02 crc kubenswrapper[4746]: I0103 03:24:02.204999 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9" exitCode=0 Jan 03 03:24:02 crc kubenswrapper[4746]: I0103 03:24:02.205097 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9"} Jan 03 03:24:02 crc kubenswrapper[4746]: I0103 03:24:02.205192 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df"} Jan 03 03:24:02 crc kubenswrapper[4746]: I0103 03:24:02.205238 4746 scope.go:117] "RemoveContainer" containerID="2ecdc62c66599c30509d543976f584e5ee130a84e44daf8b712c201fc9026c4d" Jan 03 03:26:01 crc kubenswrapper[4746]: I0103 03:26:01.373112 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:26:01 crc kubenswrapper[4746]: I0103 03:26:01.373993 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:26:31 crc kubenswrapper[4746]: I0103 03:26:31.373183 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:26:31 crc kubenswrapper[4746]: I0103 03:26:31.373735 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.360727 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rzrbx"] Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361192 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-controller" containerID="cri-o://63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361230 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="nbdb" containerID="cri-o://1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361367 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="northd" containerID="cri-o://76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361455 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361534 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-node" containerID="cri-o://7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361602 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-acl-logging" 
containerID="cri-o://ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.361747 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="sbdb" containerID="cri-o://c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.400033 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" containerID="cri-o://64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" gracePeriod=30 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.635303 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/3.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.637161 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovn-acl-logging/0.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.637675 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovn-controller/0.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.638090 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.699704 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mvbbf"] Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700044 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="northd" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700061 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="northd" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700075 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="nbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700082 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="nbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700097 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700103 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700111 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700116 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700129 4746 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kubecfg-setup" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700135 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kubecfg-setup" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700147 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700154 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700168 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700175 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700186 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerName="registry" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700192 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerName="registry" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700203 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700209 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700221 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700228 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700242 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700248 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700263 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-node" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700270 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-node" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700276 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="sbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700282 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="sbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.700294 4746 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-acl-logging" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.700300 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-acl-logging" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701695 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701715 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3da68b1-7a82-4adc-81ae-d9edc00d3c32" containerName="registry" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701749 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="sbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701757 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="nbdb" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701770 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-node" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701777 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701791 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="northd" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701802 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="kube-rbac-proxy-ovn-metrics" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701816 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-acl-logging" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701823 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701829 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovn-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.701840 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.702507 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerName="ovnkube-controller" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.709083 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/2.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.709226 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.709998 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/1.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.710055 4746 generic.go:334] "Generic (PLEG): container finished" podID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" containerID="54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60" exitCode=2 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.710154 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerDied","Data":"54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.710214 4746 scope.go:117] "RemoveContainer" containerID="46e2ae31a6a5d3d62f679481e4519a93bc6a2db3132b705e0daf37d19e1cad93" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.711036 4746 scope.go:117] "RemoveContainer" containerID="54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.711351 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-plg55_openshift-multus(7938adea-5f3a-4bfa-8776-f8b06ce7219e)\"" pod="openshift-multus/multus-plg55" podUID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714499 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714563 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714585 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714601 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714650 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714611 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714758 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714779 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714809 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714834 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714867 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714898 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714918 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714899 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714946 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714951 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714979 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket" (OuterVolumeSpecName: "log-socket") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.714999 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715002 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhbjr\" (UniqueName: \"kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715050 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715072 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715091 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715108 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715143 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch\") pod \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\" (UID: \"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff\") " Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715230 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-etc-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715249 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-netns\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715296 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-netd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-systemd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715332 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-script-lib\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715352 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5453953a-20f7-4eba-8ab0-21328362c3c3-ovn-node-metrics-cert\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715394 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715415 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-slash\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-bin\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715449 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsc4l\" (UniqueName: \"kubernetes.io/projected/5453953a-20f7-4eba-8ab0-21328362c3c3-kube-api-access-zsc4l\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715500 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715524 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-config\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715543 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-var-lib-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715560 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-ovn\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715575 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-node-log\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-kubelet\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715609 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-log-socket\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715625 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-systemd-units\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715650 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-env-overrides\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715699 4746 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715709 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715719 4746 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715729 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715737 4746 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-log-socket\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715746 4746 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715756 4746 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715764 4746 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.715822 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716285 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716634 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716800 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovnkube-controller/3.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716807 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716823 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log" (OuterVolumeSpecName: "node-log") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.716838 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.717525 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.717580 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.717608 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash" (OuterVolumeSpecName: "host-slash") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718366 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovn-acl-logging/0.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718717 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-rzrbx_a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/ovn-controller/0.log" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718958 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718981 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718988 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.718996 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719004 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719010 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" exitCode=0 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719017 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" exitCode=143 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719024 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" exitCode=143 Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719064 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719080 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:26:32 crc 
kubenswrapper[4746]: I0103 03:26:32.719089 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719137 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719164 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719175 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719180 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719185 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719191 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719196 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719202 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719206 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719212 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719217 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719219 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719319 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719327 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719332 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719338 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719343 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719349 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719354 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719359 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719364 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719369 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719377 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719386 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719394 4746 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719400 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719409 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719414 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719419 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719424 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719429 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719435 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719440 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719447 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rzrbx" event={"ID":"a9a29410-e9d4-4c5a-98cb-e2c56b9170ff","Type":"ContainerDied","Data":"a0f3d7f19faa8de934b20d119d57c5986fa5282c05e4a7cb5ba7d1b2a599e960"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719455 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719461 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719466 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719472 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719477 4746 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719482 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719487 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719491 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719496 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719501 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.719918 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr" (OuterVolumeSpecName: "kube-api-access-mhbjr") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "kube-api-access-mhbjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.720339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.739722 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" (UID: "a9a29410-e9d4-4c5a-98cb-e2c56b9170ff"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.777964 4746 scope.go:117] "RemoveContainer" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.790849 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.803314 4746 scope.go:117] "RemoveContainer" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.814932 4746 scope.go:117] "RemoveContainer" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-netd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816116 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-systemd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816139 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-script-lib\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816180 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-netd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816190 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5453953a-20f7-4eba-8ab0-21328362c3c3-ovn-node-metrics-cert\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816209 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816215 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-systemd\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816249 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816259 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-slash\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816288 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-bin\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816298 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsc4l\" (UniqueName: \"kubernetes.io/projected/5453953a-20f7-4eba-8ab0-21328362c3c3-kube-api-access-zsc4l\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816332 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-cni-bin\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816342 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816364 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-config\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816378 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-slash\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816411 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-var-lib-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-var-lib-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816485 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-ovn\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-node-log\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816545 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-kubelet\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816570 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-log-socket\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816598 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-systemd-units\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-env-overrides\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-node-log\") pod \"ovnkube-node-mvbbf\" (UID: 
\"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816690 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-etc-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816720 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-run-ovn\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816721 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-netns\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.816744 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-run-netns\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817059 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-config\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817098 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817122 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-etc-openvswitch\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817177 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-env-overrides\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817195 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-host-kubelet\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc 
kubenswrapper[4746]: I0103 03:26:32.817194 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5453953a-20f7-4eba-8ab0-21328362c3c3-ovnkube-script-lib\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817185 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-systemd-units\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817217 4746 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817240 4746 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817249 4746 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817258 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817267 4746 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817276 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817284 4746 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-host-slash\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817292 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhbjr\" (UniqueName: \"kubernetes.io/projected/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-kube-api-access-mhbjr\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817301 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817308 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817316 4746 reconciler_common.go:293] "Volume 
detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-node-log\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817326 4746 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.817855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5453953a-20f7-4eba-8ab0-21328362c3c3-log-socket\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.819068 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5453953a-20f7-4eba-8ab0-21328362c3c3-ovn-node-metrics-cert\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.847345 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsc4l\" (UniqueName: \"kubernetes.io/projected/5453953a-20f7-4eba-8ab0-21328362c3c3-kube-api-access-zsc4l\") pod \"ovnkube-node-mvbbf\" (UID: \"5453953a-20f7-4eba-8ab0-21328362c3c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.851514 4746 scope.go:117] "RemoveContainer" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.899292 4746 scope.go:117] "RemoveContainer" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.911340 4746 scope.go:117] "RemoveContainer" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.922696 4746 scope.go:117] "RemoveContainer" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.933718 4746 scope.go:117] "RemoveContainer" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.945419 4746 scope.go:117] "RemoveContainer" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.955636 4746 scope.go:117] "RemoveContainer" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.955908 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": container with ID starting with 64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11 not found: ID does not exist" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.955938 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} err="failed to get container status 
\"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": rpc error: code = NotFound desc = could not find container \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": container with ID starting with 64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.955963 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.956415 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": container with ID starting with 73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add not found: ID does not exist" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.956438 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} err="failed to get container status \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": rpc error: code = NotFound desc = could not find container \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": container with ID starting with 73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.956451 4746 scope.go:117] "RemoveContainer" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.956868 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": container with ID starting with c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f not found: ID does not exist" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.956911 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} err="failed to get container status \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": rpc error: code = NotFound desc = could not find container \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": container with ID starting with c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.956937 4746 scope.go:117] "RemoveContainer" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.959387 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": container with ID starting with 1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77 not found: ID does not exist" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.959420 4746 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} err="failed to get container status \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": rpc error: code = NotFound desc = could not find container \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": container with ID starting with 1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.959439 4746 scope.go:117] "RemoveContainer" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.959806 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": container with ID starting with 76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba not found: ID does not exist" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.959829 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} err="failed to get container status \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": rpc error: code = NotFound desc = could not find container \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": container with ID starting with 76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.959843 4746 scope.go:117] "RemoveContainer" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.960546 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": container with ID starting with 4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a not found: ID does not exist" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.960576 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} err="failed to get container status \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": rpc error: code = NotFound desc = could not find container \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": container with ID starting with 4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.960591 4746 scope.go:117] "RemoveContainer" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.960963 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": container with ID starting with 7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b not found: ID does not exist" 
containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.960984 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} err="failed to get container status \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": rpc error: code = NotFound desc = could not find container \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": container with ID starting with 7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.960996 4746 scope.go:117] "RemoveContainer" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.961709 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": container with ID starting with ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875 not found: ID does not exist" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.961738 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} err="failed to get container status \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": rpc error: code = NotFound desc = could not find container \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": container with ID starting with ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.961756 4746 scope.go:117] "RemoveContainer" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.962023 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": container with ID starting with 63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d not found: ID does not exist" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962059 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} err="failed to get container status \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": rpc error: code = NotFound desc = could not find container \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": container with ID starting with 63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962082 4746 scope.go:117] "RemoveContainer" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: E0103 03:26:32.962486 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": container with ID starting with fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350 not found: ID does not exist" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962504 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} err="failed to get container status \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": rpc error: code = NotFound desc = could not find container \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": container with ID starting with fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962517 4746 scope.go:117] "RemoveContainer" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962771 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} err="failed to get container status \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": rpc error: code = NotFound desc = could not find container \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": container with ID starting with 64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.962791 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963024 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} err="failed to get container status \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": rpc error: code = NotFound desc = could not find container \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": container with ID starting with 73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963044 4746 scope.go:117] "RemoveContainer" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963335 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} err="failed to get container status \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": rpc error: code = NotFound desc = could not find container \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": container with ID starting with c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963353 4746 scope.go:117] "RemoveContainer" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963542 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} err="failed to get container status \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": rpc error: code = NotFound desc = could not find container \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": container with ID starting with 1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963562 4746 scope.go:117] "RemoveContainer" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963847 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} err="failed to get container status \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": rpc error: code = NotFound desc = could not find container \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": container with ID starting with 76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.963864 4746 scope.go:117] "RemoveContainer" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.964116 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} err="failed to get container status \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": rpc error: code = NotFound desc = could not find container \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": container with ID starting with 4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.964137 4746 scope.go:117] "RemoveContainer" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.964810 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} err="failed to get container status \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": rpc error: code = NotFound desc = could not find container \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": container with ID starting with 7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.964831 4746 scope.go:117] "RemoveContainer" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965119 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} err="failed to get container status \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": rpc error: code = NotFound desc = could not find container \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": container with ID starting with ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875 not found: ID does not exist" Jan 
03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965137 4746 scope.go:117] "RemoveContainer" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965320 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} err="failed to get container status \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": rpc error: code = NotFound desc = could not find container \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": container with ID starting with 63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965336 4746 scope.go:117] "RemoveContainer" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965529 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} err="failed to get container status \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": rpc error: code = NotFound desc = could not find container \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": container with ID starting with fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965547 4746 scope.go:117] "RemoveContainer" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965943 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} err="failed to get container status \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": rpc error: code = NotFound desc = could not find container \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": container with ID starting with 64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.965962 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966237 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} err="failed to get container status \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": rpc error: code = NotFound desc = could not find container \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": container with ID starting with 73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966268 4746 scope.go:117] "RemoveContainer" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966509 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} err="failed to get container status 
\"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": rpc error: code = NotFound desc = could not find container \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": container with ID starting with c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966527 4746 scope.go:117] "RemoveContainer" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966870 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} err="failed to get container status \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": rpc error: code = NotFound desc = could not find container \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": container with ID starting with 1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.966891 4746 scope.go:117] "RemoveContainer" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967108 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} err="failed to get container status \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": rpc error: code = NotFound desc = could not find container \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": container with ID starting with 76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967125 4746 scope.go:117] "RemoveContainer" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967327 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} err="failed to get container status \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": rpc error: code = NotFound desc = could not find container \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": container with ID starting with 4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967351 4746 scope.go:117] "RemoveContainer" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967636 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} err="failed to get container status \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": rpc error: code = NotFound desc = could not find container \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": container with ID starting with 7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967671 4746 scope.go:117] "RemoveContainer" 
containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.967986 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} err="failed to get container status \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": rpc error: code = NotFound desc = could not find container \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": container with ID starting with ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968006 4746 scope.go:117] "RemoveContainer" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968205 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} err="failed to get container status \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": rpc error: code = NotFound desc = could not find container \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": container with ID starting with 63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968223 4746 scope.go:117] "RemoveContainer" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968424 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} err="failed to get container status \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": rpc error: code = NotFound desc = could not find container \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": container with ID starting with fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968440 4746 scope.go:117] "RemoveContainer" containerID="64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968633 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11"} err="failed to get container status \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": rpc error: code = NotFound desc = could not find container \"64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11\": container with ID starting with 64a651542ee2cfee73d34e4247f9e19b98bab774f8a673bca004508dece92d11 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968694 4746 scope.go:117] "RemoveContainer" containerID="73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968966 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add"} err="failed to get container status \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": rpc error: code = NotFound desc = could not find 
container \"73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add\": container with ID starting with 73b5625f9480b30a7d3b0151b5c13fa46ea852ca499b28ee9b10361624118add not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.968985 4746 scope.go:117] "RemoveContainer" containerID="c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969271 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f"} err="failed to get container status \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": rpc error: code = NotFound desc = could not find container \"c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f\": container with ID starting with c4a9cab9560d68c7141cded19117e5dbc3ffb72b12becb27a0c6f36d746f727f not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969286 4746 scope.go:117] "RemoveContainer" containerID="1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969495 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77"} err="failed to get container status \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": rpc error: code = NotFound desc = could not find container \"1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77\": container with ID starting with 1d47266747ca73977add3d5341cfb6fc8e1951913fa5acc1574610707849ad77 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969518 4746 scope.go:117] "RemoveContainer" containerID="76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969743 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba"} err="failed to get container status \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": rpc error: code = NotFound desc = could not find container \"76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba\": container with ID starting with 76258f4eaa8114074ec9de2c7784566207c990cfc5fe8ebe486b8b3c052e83ba not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.969770 4746 scope.go:117] "RemoveContainer" containerID="4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970034 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a"} err="failed to get container status \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": rpc error: code = NotFound desc = could not find container \"4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a\": container with ID starting with 4a6514e0cfe8e5cc5aad9a576e53d23dbd5b79c6d1392e527e4ceed82a03796a not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970055 4746 scope.go:117] "RemoveContainer" containerID="7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970299 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b"} err="failed to get container status \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": rpc error: code = NotFound desc = could not find container \"7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b\": container with ID starting with 7ef3dd612e2ee2f31702a00f7928506d5edda3a1a4d5c4419500ca73cd0a680b not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970317 4746 scope.go:117] "RemoveContainer" containerID="ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970581 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875"} err="failed to get container status \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": rpc error: code = NotFound desc = could not find container \"ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875\": container with ID starting with ad2ddf6f87d3d3a54131e198f79f416d06991cb13f9025601c77dd46c9a81875 not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970628 4746 scope.go:117] "RemoveContainer" containerID="63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970905 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d"} err="failed to get container status \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": rpc error: code = NotFound desc = could not find container \"63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d\": container with ID starting with 63314359bea0da7dfb0ba40e82c99744a48e7299d621567d20b4aa0b8e880a0d not found: ID does not exist" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.970924 4746 scope.go:117] "RemoveContainer" containerID="fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350" Jan 03 03:26:32 crc kubenswrapper[4746]: I0103 03:26:32.971124 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350"} err="failed to get container status \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": rpc error: code = NotFound desc = could not find container \"fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350\": container with ID starting with fd241e547e2c5e64ad62ce19d6ac623510702dbab9eb753a0bec17d48b6f5350 not found: ID does not exist" Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.051359 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rzrbx"] Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.054307 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rzrbx"] Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.071468 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.727511 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/2.log" Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.730007 4746 generic.go:334] "Generic (PLEG): container finished" podID="5453953a-20f7-4eba-8ab0-21328362c3c3" containerID="99e9d9923c69bcb3ba11e944347a2eee751d682d0800f4d01b5131f9c6826064" exitCode=0 Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.730103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerDied","Data":"99e9d9923c69bcb3ba11e944347a2eee751d682d0800f4d01b5131f9c6826064"} Jan 03 03:26:33 crc kubenswrapper[4746]: I0103 03:26:33.730143 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"19860bfa481d84fade1cbdd495a8e09c1726e64be145d92e73d519595c7c64ab"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.472318 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a29410-e9d4-4c5a-98cb-e2c56b9170ff" path="/var/lib/kubelet/pods/a9a29410-e9d4-4c5a-98cb-e2c56b9170ff/volumes" Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741410 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"3614cab53d3941629cfbbdf769479a7da4aef25ae66e087b4ed095d24d96343e"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741446 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"7fe3a8d9c540cafe6599dcd7aedbae26645e3dc7f915a16f53c3e35c453b018d"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741456 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"783c1f007131ef425b8b54ac690d9c76b9f52c1400e8f111a5fa1ecd58854bb1"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741464 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"00d6773e070442c9435c41c4c6ec055748ca2b9f138536d112db49581a3409dd"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"1b5a5b9c586ade4ab9f0bd64722eec5571710a918a1cdf94954e55eef191ead0"} Jan 03 03:26:34 crc kubenswrapper[4746]: I0103 03:26:34.741482 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"6dc708f25add3a1612af8f45832b5ac74fb087c344a5e941fcae045da7538aba"} Jan 03 03:26:36 crc kubenswrapper[4746]: I0103 03:26:36.763680 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" 
event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"271bb55af92290bcccd693689961948fb33ff10077c2b94f258a2577c2cb4892"} Jan 03 03:26:39 crc kubenswrapper[4746]: I0103 03:26:39.788353 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" event={"ID":"5453953a-20f7-4eba-8ab0-21328362c3c3","Type":"ContainerStarted","Data":"4081b42d4559667d8c43f486f718b865cd25e891df6ff28f42a64ddf2b820cd3"} Jan 03 03:26:39 crc kubenswrapper[4746]: I0103 03:26:39.788822 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:39 crc kubenswrapper[4746]: I0103 03:26:39.788841 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:39 crc kubenswrapper[4746]: I0103 03:26:39.818805 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" podStartSLOduration=7.818786838 podStartE2EDuration="7.818786838s" podCreationTimestamp="2026-01-03 03:26:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:26:39.815539159 +0000 UTC m=+719.665429484" watchObservedRunningTime="2026-01-03 03:26:39.818786838 +0000 UTC m=+719.668677153" Jan 03 03:26:39 crc kubenswrapper[4746]: I0103 03:26:39.823582 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:40 crc kubenswrapper[4746]: I0103 03:26:40.796978 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:40 crc kubenswrapper[4746]: I0103 03:26:40.837516 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.485205 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl"] Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.487586 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.490695 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.494339 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl"] Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.678276 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.678361 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.678390 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d98cl\" (UniqueName: \"kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.779757 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.779808 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d98cl\" (UniqueName: \"kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.779890 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.780424 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.780735 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.803377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d98cl\" (UniqueName: \"kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: I0103 03:26:44.817642 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: E0103 03:26:44.843679 4746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(040d4fb4fb3e3798acc2e8173c49d599bc9ec86b71a945e218e9c79d6341e4b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 03:26:44 crc kubenswrapper[4746]: E0103 03:26:44.843766 4746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(040d4fb4fb3e3798acc2e8173c49d599bc9ec86b71a945e218e9c79d6341e4b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: E0103 03:26:44.843795 4746 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(040d4fb4fb3e3798acc2e8173c49d599bc9ec86b71a945e218e9c79d6341e4b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:44 crc kubenswrapper[4746]: E0103 03:26:44.843854 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace(3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace(3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(040d4fb4fb3e3798acc2e8173c49d599bc9ec86b71a945e218e9c79d6341e4b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" Jan 03 03:26:45 crc kubenswrapper[4746]: I0103 03:26:45.465168 4746 scope.go:117] "RemoveContainer" containerID="54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60" Jan 03 03:26:45 crc kubenswrapper[4746]: E0103 03:26:45.465646 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-plg55_openshift-multus(7938adea-5f3a-4bfa-8776-f8b06ce7219e)\"" pod="openshift-multus/multus-plg55" podUID="7938adea-5f3a-4bfa-8776-f8b06ce7219e" Jan 03 03:26:45 crc kubenswrapper[4746]: I0103 03:26:45.823121 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:45 crc kubenswrapper[4746]: I0103 03:26:45.824285 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:45 crc kubenswrapper[4746]: E0103 03:26:45.859312 4746 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(6614528347d49fcfe41cb704a3555d91c2c1dddf06fc78238c2a98dafc3d424e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 03 03:26:45 crc kubenswrapper[4746]: E0103 03:26:45.859399 4746 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(6614528347d49fcfe41cb704a3555d91c2c1dddf06fc78238c2a98dafc3d424e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:45 crc kubenswrapper[4746]: E0103 03:26:45.859435 4746 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(6614528347d49fcfe41cb704a3555d91c2c1dddf06fc78238c2a98dafc3d424e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:45 crc kubenswrapper[4746]: E0103 03:26:45.859513 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace(3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace(3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_openshift-marketplace_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c_0(6614528347d49fcfe41cb704a3555d91c2c1dddf06fc78238c2a98dafc3d424e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" Jan 03 03:26:56 crc kubenswrapper[4746]: I0103 03:26:56.465040 4746 scope.go:117] "RemoveContainer" containerID="54f9bfe29db937bd01a081ab29a78fa38cfa432fc695ab275c1daf35535f1a60" Jan 03 03:26:56 crc kubenswrapper[4746]: I0103 03:26:56.898249 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-plg55_7938adea-5f3a-4bfa-8776-f8b06ce7219e/kube-multus/2.log" Jan 03 03:26:56 crc kubenswrapper[4746]: I0103 03:26:56.898545 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-plg55" event={"ID":"7938adea-5f3a-4bfa-8776-f8b06ce7219e","Type":"ContainerStarted","Data":"df158200e11a00eea3c2c2c932cf7070cb6451d604609591840eaa576475492a"} Jan 03 03:26:59 crc kubenswrapper[4746]: I0103 03:26:59.464449 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:59 crc kubenswrapper[4746]: I0103 03:26:59.464979 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:26:59 crc kubenswrapper[4746]: I0103 03:26:59.711547 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl"] Jan 03 03:26:59 crc kubenswrapper[4746]: W0103 03:26:59.714920 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eb2322f_9ccb_4d4f_8b59_13ef378eaf2c.slice/crio-88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466 WatchSource:0}: Error finding container 88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466: Status 404 returned error can't find the container with id 88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466 Jan 03 03:26:59 crc kubenswrapper[4746]: I0103 03:26:59.917002 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerStarted","Data":"f58b1458eaead36fdd5ca44cf3c9f12146064ecb86d24dc79c166a6760103073"} Jan 03 03:26:59 crc kubenswrapper[4746]: I0103 03:26:59.917042 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerStarted","Data":"88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466"} Jan 03 03:27:00 crc kubenswrapper[4746]: I0103 03:27:00.926887 4746 generic.go:334] "Generic (PLEG): container finished" podID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerID="f58b1458eaead36fdd5ca44cf3c9f12146064ecb86d24dc79c166a6760103073" exitCode=0 Jan 03 03:27:00 crc kubenswrapper[4746]: I0103 03:27:00.926926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerDied","Data":"f58b1458eaead36fdd5ca44cf3c9f12146064ecb86d24dc79c166a6760103073"} Jan 03 03:27:00 crc kubenswrapper[4746]: I0103 03:27:00.929334 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.374188 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.374262 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.374326 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.375035 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.375113 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df" gracePeriod=600 Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.935442 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df" exitCode=0 Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.935557 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df"} Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.935921 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f"} Jan 03 03:27:01 crc kubenswrapper[4746]: I0103 03:27:01.935943 4746 scope.go:117] "RemoveContainer" containerID="351b52f9f234b797a950052d8e305243a4430d3a8e63c889b349db04c9738ec9" Jan 03 03:27:02 crc kubenswrapper[4746]: I0103 03:27:02.942789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerStarted","Data":"315833bf8fff54e66b07da4de438377cd1f4a2603d57213d723bd5d0aa31d18a"} Jan 03 03:27:03 crc kubenswrapper[4746]: I0103 03:27:03.093272 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mvbbf" Jan 03 03:27:04 crc kubenswrapper[4746]: I0103 03:27:04.963266 4746 generic.go:334] "Generic (PLEG): container finished" podID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerID="315833bf8fff54e66b07da4de438377cd1f4a2603d57213d723bd5d0aa31d18a" exitCode=0 Jan 03 03:27:04 crc kubenswrapper[4746]: I0103 03:27:04.963358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerDied","Data":"315833bf8fff54e66b07da4de438377cd1f4a2603d57213d723bd5d0aa31d18a"} Jan 03 03:27:05 crc kubenswrapper[4746]: I0103 03:27:05.976096 4746 generic.go:334] "Generic (PLEG): container finished" podID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerID="2c8d605251b877f1adc6c5ebe45c370b37bf6423ee28f5e01cbed7c8815d7e28" exitCode=0 Jan 03 03:27:05 crc kubenswrapper[4746]: I0103 03:27:05.976215 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerDied","Data":"2c8d605251b877f1adc6c5ebe45c370b37bf6423ee28f5e01cbed7c8815d7e28"} Jan 03 
03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.328716 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.394742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d98cl\" (UniqueName: \"kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl\") pod \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.395104 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util\") pod \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.395214 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle\") pod \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\" (UID: \"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c\") " Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.420000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl" (OuterVolumeSpecName: "kube-api-access-d98cl") pod "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" (UID: "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c"). InnerVolumeSpecName "kube-api-access-d98cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.422338 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle" (OuterVolumeSpecName: "bundle") pod "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" (UID: "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.432870 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util" (OuterVolumeSpecName: "util") pod "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" (UID: "3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.496473 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.496529 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d98cl\" (UniqueName: \"kubernetes.io/projected/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-kube-api-access-d98cl\") on node \"crc\" DevicePath \"\"" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.496547 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.999829 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" event={"ID":"3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c","Type":"ContainerDied","Data":"88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466"} Jan 03 03:27:07 crc kubenswrapper[4746]: I0103 03:27:07.999928 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88145bcca29e4c667393f9f8c453e91c6721de8de54fd0e290ccaa0f7f359466" Jan 03 03:27:08 crc kubenswrapper[4746]: I0103 03:27:08.000090 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.773162 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd"] Jan 03 03:27:17 crc kubenswrapper[4746]: E0103 03:27:17.774046 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="util" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.774065 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="util" Jan 03 03:27:17 crc kubenswrapper[4746]: E0103 03:27:17.774090 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="pull" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.774096 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="pull" Jan 03 03:27:17 crc kubenswrapper[4746]: E0103 03:27:17.774114 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="extract" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.774121 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="extract" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.774238 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c" containerName="extract" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.774654 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.780435 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.780544 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.780686 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-phkk5" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.780843 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.780890 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.841820 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd"] Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.960440 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-webhook-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.960505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-apiservice-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:17 crc kubenswrapper[4746]: I0103 03:27:17.960531 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7x4\" (UniqueName: \"kubernetes.io/projected/6cb321dd-38a5-424e-a99d-f6594f2aa06e-kube-api-access-pp7x4\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.000447 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph"] Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.001380 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.003989 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.004070 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.004593 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7ps9b" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.031973 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph"] Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.061954 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-webhook-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.062045 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-apiservice-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.062068 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7x4\" (UniqueName: \"kubernetes.io/projected/6cb321dd-38a5-424e-a99d-f6594f2aa06e-kube-api-access-pp7x4\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.068400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-webhook-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.068469 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6cb321dd-38a5-424e-a99d-f6594f2aa06e-apiservice-cert\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.111442 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7x4\" (UniqueName: \"kubernetes.io/projected/6cb321dd-38a5-424e-a99d-f6594f2aa06e-kube-api-access-pp7x4\") pod \"metallb-operator-controller-manager-5bbcffc974-mlzdd\" (UID: \"6cb321dd-38a5-424e-a99d-f6594f2aa06e\") " pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.163101 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-apiservice-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.163165 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-webhook-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.163212 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mhx\" (UniqueName: \"kubernetes.io/projected/bbecca3d-7406-432b-995f-9a7ef95f6c01-kube-api-access-h7mhx\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.264698 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-webhook-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.264797 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mhx\" (UniqueName: \"kubernetes.io/projected/bbecca3d-7406-432b-995f-9a7ef95f6c01-kube-api-access-h7mhx\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.264870 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-apiservice-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.269464 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-webhook-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.269491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbecca3d-7406-432b-995f-9a7ef95f6c01-apiservice-cert\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.284474 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h7mhx\" (UniqueName: \"kubernetes.io/projected/bbecca3d-7406-432b-995f-9a7ef95f6c01-kube-api-access-h7mhx\") pod \"metallb-operator-webhook-server-5657bbd6cc-tnqph\" (UID: \"bbecca3d-7406-432b-995f-9a7ef95f6c01\") " pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.316487 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.391203 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.539707 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph"] Jan 03 03:27:18 crc kubenswrapper[4746]: W0103 03:27:18.547458 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbecca3d_7406_432b_995f_9a7ef95f6c01.slice/crio-3cde66216e24465be416eb29c9d13e6bce198782e8de96831c1a5ed45bcef05a WatchSource:0}: Error finding container 3cde66216e24465be416eb29c9d13e6bce198782e8de96831c1a5ed45bcef05a: Status 404 returned error can't find the container with id 3cde66216e24465be416eb29c9d13e6bce198782e8de96831c1a5ed45bcef05a Jan 03 03:27:18 crc kubenswrapper[4746]: I0103 03:27:18.611565 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd"] Jan 03 03:27:19 crc kubenswrapper[4746]: I0103 03:27:19.077129 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" event={"ID":"bbecca3d-7406-432b-995f-9a7ef95f6c01","Type":"ContainerStarted","Data":"3cde66216e24465be416eb29c9d13e6bce198782e8de96831c1a5ed45bcef05a"} Jan 03 03:27:19 crc kubenswrapper[4746]: I0103 03:27:19.078982 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" event={"ID":"6cb321dd-38a5-424e-a99d-f6594f2aa06e","Type":"ContainerStarted","Data":"487706aaab0ef88de545d6ccc294cc12f2da9312bcb8af6c234c5d2a0a8c57e9"} Jan 03 03:27:19 crc kubenswrapper[4746]: I0103 03:27:19.961735 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.118679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" event={"ID":"6cb321dd-38a5-424e-a99d-f6594f2aa06e","Type":"ContainerStarted","Data":"93236f7988e7c1c3139a6fd2b618c6e0fda2f33fb37f41799ccd7c3ccb5dc9e8"} Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.119319 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.120749 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" event={"ID":"bbecca3d-7406-432b-995f-9a7ef95f6c01","Type":"ContainerStarted","Data":"c138a4eba0885f684168985c386b56e7e9bdb3e88a0be85b3078fdd9a0be00dd"} Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.120898 4746 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.144305 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" podStartSLOduration=2.054247549 podStartE2EDuration="9.144283689s" podCreationTimestamp="2026-01-03 03:27:17 +0000 UTC" firstStartedPulling="2026-01-03 03:27:18.629727981 +0000 UTC m=+758.479618296" lastFinishedPulling="2026-01-03 03:27:25.719764131 +0000 UTC m=+765.569654436" observedRunningTime="2026-01-03 03:27:26.143576352 +0000 UTC m=+765.993466677" watchObservedRunningTime="2026-01-03 03:27:26.144283689 +0000 UTC m=+765.994174034" Jan 03 03:27:26 crc kubenswrapper[4746]: I0103 03:27:26.170546 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" podStartSLOduration=1.986699475 podStartE2EDuration="9.170517778s" podCreationTimestamp="2026-01-03 03:27:17 +0000 UTC" firstStartedPulling="2026-01-03 03:27:18.557151844 +0000 UTC m=+758.407042149" lastFinishedPulling="2026-01-03 03:27:25.740970147 +0000 UTC m=+765.590860452" observedRunningTime="2026-01-03 03:27:26.165093386 +0000 UTC m=+766.014983691" watchObservedRunningTime="2026-01-03 03:27:26.170517778 +0000 UTC m=+766.020408113" Jan 03 03:27:38 crc kubenswrapper[4746]: I0103 03:27:38.322775 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5657bbd6cc-tnqph" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.052375 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.053853 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.062636 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.117278 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.117420 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjntk\" (UniqueName: \"kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.117472 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.218478 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjntk\" (UniqueName: \"kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.218520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.218574 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.219022 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.219189 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.244008 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bjntk\" (UniqueName: \"kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk\") pod \"redhat-marketplace-rk8lf\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.367898 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:48 crc kubenswrapper[4746]: I0103 03:27:48.599424 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:27:48 crc kubenswrapper[4746]: W0103 03:27:48.606970 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc21b1307_1774_4354_aa02_7bf6da6f0d94.slice/crio-f6ac7b8f0c33e537432934900339499800b33655925aadc4b2d7a55f968dac6f WatchSource:0}: Error finding container f6ac7b8f0c33e537432934900339499800b33655925aadc4b2d7a55f968dac6f: Status 404 returned error can't find the container with id f6ac7b8f0c33e537432934900339499800b33655925aadc4b2d7a55f968dac6f Jan 03 03:27:49 crc kubenswrapper[4746]: I0103 03:27:49.546848 4746 generic.go:334] "Generic (PLEG): container finished" podID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerID="f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585" exitCode=0 Jan 03 03:27:49 crc kubenswrapper[4746]: I0103 03:27:49.547721 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerDied","Data":"f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585"} Jan 03 03:27:49 crc kubenswrapper[4746]: I0103 03:27:49.547781 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerStarted","Data":"f6ac7b8f0c33e537432934900339499800b33655925aadc4b2d7a55f968dac6f"} Jan 03 03:27:50 crc kubenswrapper[4746]: I0103 03:27:50.553109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerStarted","Data":"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d"} Jan 03 03:27:51 crc kubenswrapper[4746]: I0103 03:27:51.561605 4746 generic.go:334] "Generic (PLEG): container finished" podID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerID="c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d" exitCode=0 Jan 03 03:27:51 crc kubenswrapper[4746]: I0103 03:27:51.561809 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerDied","Data":"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d"} Jan 03 03:27:52 crc kubenswrapper[4746]: I0103 03:27:52.571942 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerStarted","Data":"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70"} Jan 03 03:27:52 crc kubenswrapper[4746]: I0103 03:27:52.590508 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rk8lf" podStartSLOduration=2.173981564 
podStartE2EDuration="4.590478123s" podCreationTimestamp="2026-01-03 03:27:48 +0000 UTC" firstStartedPulling="2026-01-03 03:27:49.549554493 +0000 UTC m=+789.399444808" lastFinishedPulling="2026-01-03 03:27:51.966051032 +0000 UTC m=+791.815941367" observedRunningTime="2026-01-03 03:27:52.589204552 +0000 UTC m=+792.439094887" watchObservedRunningTime="2026-01-03 03:27:52.590478123 +0000 UTC m=+792.440368468" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.019441 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.020837 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.035385 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.095333 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.095369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.095420 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xckpw\" (UniqueName: \"kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.196600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.196643 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.196709 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xckpw\" (UniqueName: \"kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.197267 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.197290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.219208 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xckpw\" (UniqueName: \"kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw\") pod \"certified-operators-mq5jq\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.351015 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:27:54 crc kubenswrapper[4746]: I0103 03:27:54.743065 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:27:54 crc kubenswrapper[4746]: W0103 03:27:54.761118 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89806220_a6e9_4ad8_b661_1d02bb829caa.slice/crio-7e482f42132d2d05e0a16e69b9c233ef4e1888fdfd0832f88584caa847eeb058 WatchSource:0}: Error finding container 7e482f42132d2d05e0a16e69b9c233ef4e1888fdfd0832f88584caa847eeb058: Status 404 returned error can't find the container with id 7e482f42132d2d05e0a16e69b9c233ef4e1888fdfd0832f88584caa847eeb058 Jan 03 03:27:55 crc kubenswrapper[4746]: I0103 03:27:55.599000 4746 generic.go:334] "Generic (PLEG): container finished" podID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerID="32892c42a73b55f46f08644d26c7180b58a1f870ca3294185b0df333d99bc782" exitCode=0 Jan 03 03:27:55 crc kubenswrapper[4746]: I0103 03:27:55.599059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerDied","Data":"32892c42a73b55f46f08644d26c7180b58a1f870ca3294185b0df333d99bc782"} Jan 03 03:27:55 crc kubenswrapper[4746]: I0103 03:27:55.599359 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerStarted","Data":"7e482f42132d2d05e0a16e69b9c233ef4e1888fdfd0832f88584caa847eeb058"} Jan 03 03:27:57 crc kubenswrapper[4746]: I0103 03:27:57.610963 4746 generic.go:334] "Generic (PLEG): container finished" podID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerID="d7d9414a31c74bc84c9717d087dacdeea71350b50dabe21de40199b3ca0bb80b" exitCode=0 Jan 03 03:27:57 crc kubenswrapper[4746]: I0103 03:27:57.611052 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerDied","Data":"d7d9414a31c74bc84c9717d087dacdeea71350b50dabe21de40199b3ca0bb80b"} Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.368685 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.368748 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.393629 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bbcffc974-mlzdd" Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.462189 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.619632 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerStarted","Data":"5fa6937333d9e755ef14c681237c01d4b196e5408f39ebf58de73429a5b4e82a"} Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.638529 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mq5jq" podStartSLOduration=2.112532895 podStartE2EDuration="4.638511357s" podCreationTimestamp="2026-01-03 03:27:54 +0000 UTC" firstStartedPulling="2026-01-03 03:27:55.600930699 +0000 UTC m=+795.450821024" lastFinishedPulling="2026-01-03 03:27:58.126909181 +0000 UTC m=+797.976799486" observedRunningTime="2026-01-03 03:27:58.637445711 +0000 UTC m=+798.487336016" watchObservedRunningTime="2026-01-03 03:27:58.638511357 +0000 UTC m=+798.488401662" Jan 03 03:27:58 crc kubenswrapper[4746]: I0103 03:27:58.669324 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.058186 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-clkjf"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.060405 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.062666 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rn9ml" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.063840 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.065796 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.066831 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.068081 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.069058 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.085550 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.139096 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gsdzz"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.140465 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.144055 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.144067 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.144216 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7j4m2" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.145279 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.158675 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-tz4v5"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.159819 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.165985 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169420 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-reloader\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-startup\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sktdq\" (UniqueName: \"kubernetes.io/projected/2883eb8b-d6db-4ede-bf40-cb8aee643105-kube-api-access-sktdq\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169714 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169771 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169844 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4wms\" (UniqueName: \"kubernetes.io/projected/c58c9579-76cf-457e-a5da-ba83edbf0960-kube-api-access-l4wms\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169896 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-conf\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169932 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-sockets\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.169954 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.171223 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-tz4v5"] Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271587 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271631 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271676 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271802 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4wms\" (UniqueName: \"kubernetes.io/projected/c58c9579-76cf-457e-a5da-ba83edbf0960-kube-api-access-l4wms\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-cert\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271890 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-conf\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271909 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metrics-certs\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271935 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-sockets\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.271998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-reloader\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272120 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272127 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-startup\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.272177 4746 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.272256 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert podName:2883eb8b-d6db-4ede-bf40-cb8aee643105 nodeName:}" failed. No retries permitted until 2026-01-03 03:27:59.772233964 +0000 UTC m=+799.622124269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert") pod "frr-k8s-webhook-server-7784b6fcf-ztjlw" (UID: "2883eb8b-d6db-4ede-bf40-cb8aee643105") : secret "frr-k8s-webhook-server-cert" not found Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272181 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhh4\" (UniqueName: \"kubernetes.io/projected/1ebc1074-93c2-408f-bad5-0392529562c7-kube-api-access-sbhh4\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-conf\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272333 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metallb-excludel2\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272401 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sktdq\" (UniqueName: \"kubernetes.io/projected/2883eb8b-d6db-4ede-bf40-cb8aee643105-kube-api-access-sktdq\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272458 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72zqx\" (UniqueName: \"kubernetes.io/projected/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-kube-api-access-72zqx\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272511 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272554 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-reloader\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272625 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-sockets\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.272671 4746 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.272727 4746 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs podName:c58c9579-76cf-457e-a5da-ba83edbf0960 nodeName:}" failed. No retries permitted until 2026-01-03 03:27:59.772711526 +0000 UTC m=+799.622601831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs") pod "frr-k8s-clkjf" (UID: "c58c9579-76cf-457e-a5da-ba83edbf0960") : secret "frr-k8s-certs-secret" not found Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.272947 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c58c9579-76cf-457e-a5da-ba83edbf0960-frr-startup\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.290553 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4wms\" (UniqueName: \"kubernetes.io/projected/c58c9579-76cf-457e-a5da-ba83edbf0960-kube-api-access-l4wms\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.295357 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sktdq\" (UniqueName: \"kubernetes.io/projected/2883eb8b-d6db-4ede-bf40-cb8aee643105-kube-api-access-sktdq\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373606 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72zqx\" (UniqueName: \"kubernetes.io/projected/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-kube-api-access-72zqx\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373684 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373706 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373735 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-cert\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373753 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metrics-certs\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 
03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbhh4\" (UniqueName: \"kubernetes.io/projected/1ebc1074-93c2-408f-bad5-0392529562c7-kube-api-access-sbhh4\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.373815 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metallb-excludel2\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.374452 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metallb-excludel2\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.374776 4746 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.374815 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs podName:1ebc1074-93c2-408f-bad5-0392529562c7 nodeName:}" failed. No retries permitted until 2026-01-03 03:27:59.874803268 +0000 UTC m=+799.724693573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs") pod "controller-5bddd4b946-tz4v5" (UID: "1ebc1074-93c2-408f-bad5-0392529562c7") : secret "controller-certs-secret" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.374938 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.374966 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist podName:460c2eb9-1e8c-499c-871b-a4bcf6fe99a1 nodeName:}" failed. No retries permitted until 2026-01-03 03:27:59.874958231 +0000 UTC m=+799.724848536 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist") pod "speaker-gsdzz" (UID: "460c2eb9-1e8c-499c-871b-a4bcf6fe99a1") : secret "metallb-memberlist" not found Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.377350 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.379050 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-metrics-certs\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.390364 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-cert\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.406250 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72zqx\" (UniqueName: \"kubernetes.io/projected/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-kube-api-access-72zqx\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.407452 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbhh4\" (UniqueName: \"kubernetes.io/projected/1ebc1074-93c2-408f-bad5-0392529562c7-kube-api-access-sbhh4\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.778896 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.778972 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.784310 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c58c9579-76cf-457e-a5da-ba83edbf0960-metrics-certs\") pod \"frr-k8s-clkjf\" (UID: \"c58c9579-76cf-457e-a5da-ba83edbf0960\") " pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.784564 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2883eb8b-d6db-4ede-bf40-cb8aee643105-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-ztjlw\" (UID: \"2883eb8b-d6db-4ede-bf40-cb8aee643105\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.880519 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.880566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.880677 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 03 03:27:59 crc kubenswrapper[4746]: E0103 03:27:59.880720 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist podName:460c2eb9-1e8c-499c-871b-a4bcf6fe99a1 nodeName:}" failed. No retries permitted until 2026-01-03 03:28:00.880707055 +0000 UTC m=+800.730597360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist") pod "speaker-gsdzz" (UID: "460c2eb9-1e8c-499c-871b-a4bcf6fe99a1") : secret "metallb-memberlist" not found Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.884266 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ebc1074-93c2-408f-bad5-0392529562c7-metrics-certs\") pod \"controller-5bddd4b946-tz4v5\" (UID: \"1ebc1074-93c2-408f-bad5-0392529562c7\") " pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.975753 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:27:59 crc kubenswrapper[4746]: I0103 03:27:59.984962 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.071432 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.300125 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw"] Jan 03 03:28:00 crc kubenswrapper[4746]: W0103 03:28:00.309450 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2883eb8b_d6db_4ede_bf40_cb8aee643105.slice/crio-5b7fc9b0962394dacbde9651813c05bbdc3c4cf36e213fc62816ec291038da8e WatchSource:0}: Error finding container 5b7fc9b0962394dacbde9651813c05bbdc3c4cf36e213fc62816ec291038da8e: Status 404 returned error can't find the container with id 5b7fc9b0962394dacbde9651813c05bbdc3c4cf36e213fc62816ec291038da8e Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.365361 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-tz4v5"] Jan 03 03:28:00 crc kubenswrapper[4746]: W0103 03:28:00.373478 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ebc1074_93c2_408f_bad5_0392529562c7.slice/crio-3a299653d4c3a3e92ad7e2930d9914a74b993738e647654f9bb04b797a392032 WatchSource:0}: Error finding container 3a299653d4c3a3e92ad7e2930d9914a74b993738e647654f9bb04b797a392032: Status 404 returned error can't find the container with id 3a299653d4c3a3e92ad7e2930d9914a74b993738e647654f9bb04b797a392032 Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.447077 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.629633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-tz4v5" event={"ID":"1ebc1074-93c2-408f-bad5-0392529562c7","Type":"ContainerStarted","Data":"f5f68cbf57dde7586d60af779e318e466ba81e77f3c800b119b07ec86dc7cb7d"} Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.629791 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-tz4v5" event={"ID":"1ebc1074-93c2-408f-bad5-0392529562c7","Type":"ContainerStarted","Data":"3a299653d4c3a3e92ad7e2930d9914a74b993738e647654f9bb04b797a392032"} Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.630592 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"d7b1f11b723e81a2d8cd8a0790eb7c24cf20f818d2e426169422352c5db66562"} Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.632019 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rk8lf" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="registry-server" containerID="cri-o://284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70" gracePeriod=2 Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.632352 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" event={"ID":"2883eb8b-d6db-4ede-bf40-cb8aee643105","Type":"ContainerStarted","Data":"5b7fc9b0962394dacbde9651813c05bbdc3c4cf36e213fc62816ec291038da8e"} Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.894136 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.899560 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/460c2eb9-1e8c-499c-871b-a4bcf6fe99a1-memberlist\") pod \"speaker-gsdzz\" (UID: \"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1\") " pod="metallb-system/speaker-gsdzz" Jan 03 03:28:00 crc kubenswrapper[4746]: I0103 03:28:00.953857 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gsdzz" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.554834 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.648112 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gsdzz" event={"ID":"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1","Type":"ContainerStarted","Data":"e032b4c450e3e16e2a27120a3f682910a203f69c061760adffa639bd11f79bc0"} Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.648167 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gsdzz" event={"ID":"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1","Type":"ContainerStarted","Data":"0295ea40b3e56b8c653c2a679fbb21b86b15c650dfef57d0b9fdb4b59c8f0b49"} Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.653444 4746 generic.go:334] "Generic (PLEG): container finished" podID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerID="284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70" exitCode=0 Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.653480 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerDied","Data":"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70"} Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.653504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rk8lf" event={"ID":"c21b1307-1774-4354-aa02-7bf6da6f0d94","Type":"ContainerDied","Data":"f6ac7b8f0c33e537432934900339499800b33655925aadc4b2d7a55f968dac6f"} Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.653519 4746 scope.go:117] "RemoveContainer" containerID="284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.653615 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rk8lf" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.678464 4746 scope.go:117] "RemoveContainer" containerID="c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.703578 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjntk\" (UniqueName: \"kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk\") pod \"c21b1307-1774-4354-aa02-7bf6da6f0d94\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.703681 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities\") pod \"c21b1307-1774-4354-aa02-7bf6da6f0d94\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.703718 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content\") pod \"c21b1307-1774-4354-aa02-7bf6da6f0d94\" (UID: \"c21b1307-1774-4354-aa02-7bf6da6f0d94\") " Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.709546 4746 scope.go:117] "RemoveContainer" containerID="f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.710275 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk" (OuterVolumeSpecName: "kube-api-access-bjntk") pod "c21b1307-1774-4354-aa02-7bf6da6f0d94" (UID: "c21b1307-1774-4354-aa02-7bf6da6f0d94"). InnerVolumeSpecName "kube-api-access-bjntk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.710921 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities" (OuterVolumeSpecName: "utilities") pod "c21b1307-1774-4354-aa02-7bf6da6f0d94" (UID: "c21b1307-1774-4354-aa02-7bf6da6f0d94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.736627 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21b1307-1774-4354-aa02-7bf6da6f0d94" (UID: "c21b1307-1774-4354-aa02-7bf6da6f0d94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.755445 4746 scope.go:117] "RemoveContainer" containerID="284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70" Jan 03 03:28:01 crc kubenswrapper[4746]: E0103 03:28:01.755978 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70\": container with ID starting with 284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70 not found: ID does not exist" containerID="284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.756022 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70"} err="failed to get container status \"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70\": rpc error: code = NotFound desc = could not find container \"284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70\": container with ID starting with 284e281940c92ebe277bc9dad069e1cbce82417ab83f71e7dd4cee120f636d70 not found: ID does not exist" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.756047 4746 scope.go:117] "RemoveContainer" containerID="c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d" Jan 03 03:28:01 crc kubenswrapper[4746]: E0103 03:28:01.756551 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d\": container with ID starting with c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d not found: ID does not exist" containerID="c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.756594 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d"} err="failed to get container status \"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d\": rpc error: code = NotFound desc = could not find container \"c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d\": container with ID starting with c7fee3d294e3c17ac19568726a0b5fdc14f74d72e3619bdef99d03851119a17d not found: ID does not exist" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.756624 4746 scope.go:117] "RemoveContainer" containerID="f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585" Jan 03 03:28:01 crc kubenswrapper[4746]: E0103 03:28:01.756952 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585\": container with ID starting with f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585 not found: ID does not exist" containerID="f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.756976 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585"} err="failed to get container status \"f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585\": rpc error: code = NotFound desc = could not 
find container \"f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585\": container with ID starting with f8f25e77deb38e8a8d93ef04be8e1bdc6688fc74179d91306c4f368594650585 not found: ID does not exist" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.805215 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjntk\" (UniqueName: \"kubernetes.io/projected/c21b1307-1774-4354-aa02-7bf6da6f0d94-kube-api-access-bjntk\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.805505 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:01 crc kubenswrapper[4746]: I0103 03:28:01.805515 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21b1307-1774-4354-aa02-7bf6da6f0d94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:02 crc kubenswrapper[4746]: I0103 03:28:01.998364 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:28:02 crc kubenswrapper[4746]: I0103 03:28:02.009138 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rk8lf"] Jan 03 03:28:02 crc kubenswrapper[4746]: I0103 03:28:02.473344 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" path="/var/lib/kubelet/pods/c21b1307-1774-4354-aa02-7bf6da6f0d94/volumes" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.352013 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.352568 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.396600 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.684937 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gsdzz" event={"ID":"460c2eb9-1e8c-499c-871b-a4bcf6fe99a1","Type":"ContainerStarted","Data":"a3afd74ff1d5803ea441452bbb5f884be8c303a91a40def700786f8865bdf6e7"} Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.685073 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gsdzz" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.689068 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-tz4v5" event={"ID":"1ebc1074-93c2-408f-bad5-0392529562c7","Type":"ContainerStarted","Data":"7b03200342beeb90c85bd6f9b63e4fe837abbe3a3c3588a5f03967d1370271d8"} Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.699889 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gsdzz" podStartSLOduration=2.467619077 podStartE2EDuration="5.699868227s" podCreationTimestamp="2026-01-03 03:27:59 +0000 UTC" firstStartedPulling="2026-01-03 03:28:01.200884117 +0000 UTC m=+801.050774422" lastFinishedPulling="2026-01-03 03:28:04.433133267 +0000 UTC m=+804.283023572" observedRunningTime="2026-01-03 03:28:04.699732894 +0000 UTC m=+804.549623199" watchObservedRunningTime="2026-01-03 
03:28:04.699868227 +0000 UTC m=+804.549758532" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.721085 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-tz4v5" podStartSLOduration=1.8316973170000002 podStartE2EDuration="5.721065015s" podCreationTimestamp="2026-01-03 03:27:59 +0000 UTC" firstStartedPulling="2026-01-03 03:28:00.52617661 +0000 UTC m=+800.376066925" lastFinishedPulling="2026-01-03 03:28:04.415544318 +0000 UTC m=+804.265434623" observedRunningTime="2026-01-03 03:28:04.718035271 +0000 UTC m=+804.567925576" watchObservedRunningTime="2026-01-03 03:28:04.721065015 +0000 UTC m=+804.570955320" Jan 03 03:28:04 crc kubenswrapper[4746]: I0103 03:28:04.749195 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:05 crc kubenswrapper[4746]: I0103 03:28:05.706555 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:28:05 crc kubenswrapper[4746]: I0103 03:28:05.815152 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:28:07 crc kubenswrapper[4746]: I0103 03:28:07.716753 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mq5jq" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="registry-server" containerID="cri-o://5fa6937333d9e755ef14c681237c01d4b196e5408f39ebf58de73429a5b4e82a" gracePeriod=2 Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.730189 4746 generic.go:334] "Generic (PLEG): container finished" podID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerID="5fa6937333d9e755ef14c681237c01d4b196e5408f39ebf58de73429a5b4e82a" exitCode=0 Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.730388 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerDied","Data":"5fa6937333d9e755ef14c681237c01d4b196e5408f39ebf58de73429a5b4e82a"} Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.802065 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.922405 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xckpw\" (UniqueName: \"kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw\") pod \"89806220-a6e9-4ad8-b661-1d02bb829caa\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.922484 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities\") pod \"89806220-a6e9-4ad8-b661-1d02bb829caa\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.922541 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content\") pod \"89806220-a6e9-4ad8-b661-1d02bb829caa\" (UID: \"89806220-a6e9-4ad8-b661-1d02bb829caa\") " Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.923700 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities" (OuterVolumeSpecName: "utilities") pod "89806220-a6e9-4ad8-b661-1d02bb829caa" (UID: "89806220-a6e9-4ad8-b661-1d02bb829caa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:08 crc kubenswrapper[4746]: I0103 03:28:08.929804 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw" (OuterVolumeSpecName: "kube-api-access-xckpw") pod "89806220-a6e9-4ad8-b661-1d02bb829caa" (UID: "89806220-a6e9-4ad8-b661-1d02bb829caa"). InnerVolumeSpecName "kube-api-access-xckpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.009197 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89806220-a6e9-4ad8-b661-1d02bb829caa" (UID: "89806220-a6e9-4ad8-b661-1d02bb829caa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.023805 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xckpw\" (UniqueName: \"kubernetes.io/projected/89806220-a6e9-4ad8-b661-1d02bb829caa-kube-api-access-xckpw\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.023851 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.023867 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89806220-a6e9-4ad8-b661-1d02bb829caa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.739367 4746 generic.go:334] "Generic (PLEG): container finished" podID="c58c9579-76cf-457e-a5da-ba83edbf0960" containerID="4af03e2e3afb0fb32c9463d68e6751d5f68562169dc3c9195018a8582a509f94" exitCode=0 Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.739436 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerDied","Data":"4af03e2e3afb0fb32c9463d68e6751d5f68562169dc3c9195018a8582a509f94"} Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.741443 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" event={"ID":"2883eb8b-d6db-4ede-bf40-cb8aee643105","Type":"ContainerStarted","Data":"44cd6e7db75e8b998af623e54c9606895c47d12bade6b45c82477dde8b66bcbc"} Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.741617 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.744107 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq5jq" event={"ID":"89806220-a6e9-4ad8-b661-1d02bb829caa","Type":"ContainerDied","Data":"7e482f42132d2d05e0a16e69b9c233ef4e1888fdfd0832f88584caa847eeb058"} Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.744169 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mq5jq" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.744289 4746 scope.go:117] "RemoveContainer" containerID="5fa6937333d9e755ef14c681237c01d4b196e5408f39ebf58de73429a5b4e82a" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.762921 4746 scope.go:117] "RemoveContainer" containerID="d7d9414a31c74bc84c9717d087dacdeea71350b50dabe21de40199b3ca0bb80b" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.782090 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.797026 4746 scope.go:117] "RemoveContainer" containerID="32892c42a73b55f46f08644d26c7180b58a1f870ca3294185b0df333d99bc782" Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.799090 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mq5jq"] Jan 03 03:28:09 crc kubenswrapper[4746]: I0103 03:28:09.809144 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" podStartSLOduration=2.5258726080000002 podStartE2EDuration="10.809117276s" podCreationTimestamp="2026-01-03 03:27:59 +0000 UTC" firstStartedPulling="2026-01-03 03:28:00.31970879 +0000 UTC m=+800.169599095" lastFinishedPulling="2026-01-03 03:28:08.602953468 +0000 UTC m=+808.452843763" observedRunningTime="2026-01-03 03:28:09.808916022 +0000 UTC m=+809.658806327" watchObservedRunningTime="2026-01-03 03:28:09.809117276 +0000 UTC m=+809.659007581" Jan 03 03:28:10 crc kubenswrapper[4746]: I0103 03:28:10.076967 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-tz4v5" Jan 03 03:28:10 crc kubenswrapper[4746]: I0103 03:28:10.475166 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" path="/var/lib/kubelet/pods/89806220-a6e9-4ad8-b661-1d02bb829caa/volumes" Jan 03 03:28:10 crc kubenswrapper[4746]: I0103 03:28:10.751628 4746 generic.go:334] "Generic (PLEG): container finished" podID="c58c9579-76cf-457e-a5da-ba83edbf0960" containerID="2a94d38593f37b3bbe8e58df82860b69055cf8d9eb65ef720be97511c06a9fdb" exitCode=0 Jan 03 03:28:10 crc kubenswrapper[4746]: I0103 03:28:10.751705 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerDied","Data":"2a94d38593f37b3bbe8e58df82860b69055cf8d9eb65ef720be97511c06a9fdb"} Jan 03 03:28:11 crc kubenswrapper[4746]: I0103 03:28:11.759204 4746 generic.go:334] "Generic (PLEG): container finished" podID="c58c9579-76cf-457e-a5da-ba83edbf0960" containerID="1d42a9bbd64516ec455d59e0b530231620ba32386be009eaaebf34921f6270ed" exitCode=0 Jan 03 03:28:11 crc kubenswrapper[4746]: I0103 03:28:11.759310 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerDied","Data":"1d42a9bbd64516ec455d59e0b530231620ba32386be009eaaebf34921f6270ed"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771039 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"2d8f450d8ee85ce567878edccfc3beb339dc05949a45489e74101950fe93abaa"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771326 4746 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"fde316fb21fd2738dea3d29199080bf2b1ab502d3a9d9aa350c2ccc39d9f700e"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771342 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"0acaf21378bcbbd86523bc575ee761caabfe818a1ca68e4dc8f15618031cddce"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771357 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"f5fc7443152e9b1b42fcf7471fc504a04ebc271f782baac6e103a558fff11c25"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771369 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"37d241d2b7d4f93900a7393677b239b8067dd5ef54c1ebdb36db6d47e51718ff"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771380 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-clkjf" event={"ID":"c58c9579-76cf-457e-a5da-ba83edbf0960","Type":"ContainerStarted","Data":"c0e9eda1c901e1b9b77643bf6669bdf378c92d2c4b813fac38c30354da6f5a45"} Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.771399 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:28:12 crc kubenswrapper[4746]: I0103 03:28:12.809239 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-clkjf" podStartSLOduration=5.310751478 podStartE2EDuration="13.80922097s" podCreationTimestamp="2026-01-03 03:27:59 +0000 UTC" firstStartedPulling="2026-01-03 03:28:00.096755058 +0000 UTC m=+799.946645373" lastFinishedPulling="2026-01-03 03:28:08.59522455 +0000 UTC m=+808.445114865" observedRunningTime="2026-01-03 03:28:12.803742616 +0000 UTC m=+812.653632931" watchObservedRunningTime="2026-01-03 03:28:12.80922097 +0000 UTC m=+812.659111285" Jan 03 03:28:14 crc kubenswrapper[4746]: I0103 03:28:14.976762 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:28:15 crc kubenswrapper[4746]: I0103 03:28:15.032441 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:28:19 crc kubenswrapper[4746]: I0103 03:28:19.991903 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-ztjlw" Jan 03 03:28:20 crc kubenswrapper[4746]: I0103 03:28:20.959061 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gsdzz" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.929818 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 03:28:26.930511 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="extract-content" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930526 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="extract-content" Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 
03:28:26.930534 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="extract-content" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930540 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="extract-content" Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 03:28:26.930552 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="extract-utilities" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930560 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="extract-utilities" Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 03:28:26.930568 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="extract-utilities" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930574 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="extract-utilities" Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 03:28:26.930585 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930591 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: E0103 03:28:26.930605 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930611 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930757 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="89806220-a6e9-4ad8-b661-1d02bb829caa" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.930770 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21b1307-1774-4354-aa02-7bf6da6f0d94" containerName="registry-server" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.931163 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.935370 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.935437 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-89xrj" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.935467 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.947046 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:26 crc kubenswrapper[4746]: I0103 03:28:26.979413 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6kv\" (UniqueName: \"kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv\") pod \"mariadb-operator-index-wxsbk\" (UID: \"53ba4b1f-af6a-41d3-86fd-5494d89e911a\") " pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:27 crc kubenswrapper[4746]: I0103 03:28:27.080854 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n6kv\" (UniqueName: \"kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv\") pod \"mariadb-operator-index-wxsbk\" (UID: \"53ba4b1f-af6a-41d3-86fd-5494d89e911a\") " pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:27 crc kubenswrapper[4746]: I0103 03:28:27.105031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6kv\" (UniqueName: \"kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv\") pod \"mariadb-operator-index-wxsbk\" (UID: \"53ba4b1f-af6a-41d3-86fd-5494d89e911a\") " pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:27 crc kubenswrapper[4746]: I0103 03:28:27.258998 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:27 crc kubenswrapper[4746]: I0103 03:28:27.756507 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:27 crc kubenswrapper[4746]: W0103 03:28:27.765415 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53ba4b1f_af6a_41d3_86fd_5494d89e911a.slice/crio-3ce873631e06ef1258c2c2c8cc3499ecae30b61801b8cbc160a4373655dfaff7 WatchSource:0}: Error finding container 3ce873631e06ef1258c2c2c8cc3499ecae30b61801b8cbc160a4373655dfaff7: Status 404 returned error can't find the container with id 3ce873631e06ef1258c2c2c8cc3499ecae30b61801b8cbc160a4373655dfaff7 Jan 03 03:28:27 crc kubenswrapper[4746]: I0103 03:28:27.873460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-wxsbk" event={"ID":"53ba4b1f-af6a-41d3-86fd-5494d89e911a","Type":"ContainerStarted","Data":"3ce873631e06ef1258c2c2c8cc3499ecae30b61801b8cbc160a4373655dfaff7"} Jan 03 03:28:29 crc kubenswrapper[4746]: I0103 03:28:29.888449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-wxsbk" event={"ID":"53ba4b1f-af6a-41d3-86fd-5494d89e911a","Type":"ContainerStarted","Data":"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45"} Jan 03 03:28:29 crc kubenswrapper[4746]: I0103 03:28:29.910458 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-wxsbk" podStartSLOduration=2.166653004 podStartE2EDuration="3.910426805s" podCreationTimestamp="2026-01-03 03:28:26 +0000 UTC" firstStartedPulling="2026-01-03 03:28:27.767167844 +0000 UTC m=+827.617058169" lastFinishedPulling="2026-01-03 03:28:29.510941645 +0000 UTC m=+829.360831970" observedRunningTime="2026-01-03 03:28:29.905471924 +0000 UTC m=+829.755362269" watchObservedRunningTime="2026-01-03 03:28:29.910426805 +0000 UTC m=+829.760317150" Jan 03 03:28:29 crc kubenswrapper[4746]: I0103 03:28:29.982040 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-clkjf" Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.110070 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.720091 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.753508 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.753754 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.842007 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s85rm\" (UniqueName: \"kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm\") pod \"mariadb-operator-index-k7f5z\" (UID: \"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5\") " pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.943848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s85rm\" (UniqueName: \"kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm\") pod \"mariadb-operator-index-k7f5z\" (UID: \"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5\") " pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:30 crc kubenswrapper[4746]: I0103 03:28:30.969458 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s85rm\" (UniqueName: \"kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm\") pod \"mariadb-operator-index-k7f5z\" (UID: \"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5\") " pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:31 crc kubenswrapper[4746]: I0103 03:28:31.078996 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:31 crc kubenswrapper[4746]: I0103 03:28:31.326934 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:28:31 crc kubenswrapper[4746]: I0103 03:28:31.916418 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-wxsbk" podUID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" containerName="registry-server" containerID="cri-o://eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45" gracePeriod=2 Jan 03 03:28:31 crc kubenswrapper[4746]: I0103 03:28:31.916910 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-k7f5z" event={"ID":"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5","Type":"ContainerStarted","Data":"e0d59f86a8836cbb86e04f908bdd9171f01bf72de21dfb8f75b5d7dae1db99e9"} Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.350020 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.464109 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n6kv\" (UniqueName: \"kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv\") pod \"53ba4b1f-af6a-41d3-86fd-5494d89e911a\" (UID: \"53ba4b1f-af6a-41d3-86fd-5494d89e911a\") " Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.478341 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv" (OuterVolumeSpecName: "kube-api-access-6n6kv") pod "53ba4b1f-af6a-41d3-86fd-5494d89e911a" (UID: "53ba4b1f-af6a-41d3-86fd-5494d89e911a"). InnerVolumeSpecName "kube-api-access-6n6kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.565710 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n6kv\" (UniqueName: \"kubernetes.io/projected/53ba4b1f-af6a-41d3-86fd-5494d89e911a-kube-api-access-6n6kv\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.928920 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-k7f5z" event={"ID":"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5","Type":"ContainerStarted","Data":"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16"} Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.931629 4746 generic.go:334] "Generic (PLEG): container finished" podID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" containerID="eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45" exitCode=0 Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.931681 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-wxsbk" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.932390 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-wxsbk" event={"ID":"53ba4b1f-af6a-41d3-86fd-5494d89e911a","Type":"ContainerDied","Data":"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45"} Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.932421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-wxsbk" event={"ID":"53ba4b1f-af6a-41d3-86fd-5494d89e911a","Type":"ContainerDied","Data":"3ce873631e06ef1258c2c2c8cc3499ecae30b61801b8cbc160a4373655dfaff7"} Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.932438 4746 scope.go:117] "RemoveContainer" containerID="eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.949445 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-k7f5z" podStartSLOduration=2.481358524 podStartE2EDuration="2.949422448s" podCreationTimestamp="2026-01-03 03:28:30 +0000 UTC" firstStartedPulling="2026-01-03 03:28:31.340475438 +0000 UTC m=+831.190365753" lastFinishedPulling="2026-01-03 03:28:31.808539332 +0000 UTC m=+831.658429677" observedRunningTime="2026-01-03 03:28:32.941471704 +0000 UTC m=+832.791362009" watchObservedRunningTime="2026-01-03 03:28:32.949422448 +0000 UTC m=+832.799312753" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.958492 4746 scope.go:117] "RemoveContainer" containerID="eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45" Jan 03 03:28:32 crc kubenswrapper[4746]: E0103 03:28:32.959387 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45\": container with ID starting with eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45 not found: ID does not exist" containerID="eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.959450 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45"} err="failed to get container status \"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45\": 
rpc error: code = NotFound desc = could not find container \"eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45\": container with ID starting with eeeefdcac59c1295557798e17b12f5a13b601f618d4f61e87718c4f00af07b45 not found: ID does not exist" Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.982921 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:32 crc kubenswrapper[4746]: I0103 03:28:32.987854 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-wxsbk"] Jan 03 03:28:34 crc kubenswrapper[4746]: I0103 03:28:34.477584 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" path="/var/lib/kubelet/pods/53ba4b1f-af6a-41d3-86fd-5494d89e911a/volumes" Jan 03 03:28:41 crc kubenswrapper[4746]: I0103 03:28:41.079597 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:41 crc kubenswrapper[4746]: I0103 03:28:41.081916 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:41 crc kubenswrapper[4746]: I0103 03:28:41.110566 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:42 crc kubenswrapper[4746]: I0103 03:28:42.047890 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.561431 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8"] Jan 03 03:28:46 crc kubenswrapper[4746]: E0103 03:28:46.561972 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" containerName="registry-server" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.561984 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" containerName="registry-server" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.562090 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ba4b1f-af6a-41d3-86fd-5494d89e911a" containerName="registry-server" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.562906 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.565823 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hpjh5" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.580819 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8"] Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.638737 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqqc\" (UniqueName: \"kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.639052 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.639208 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.740533 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.740606 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.740802 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqqc\" (UniqueName: \"kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.741647 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.741686 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.766641 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqqc\" (UniqueName: \"kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc\") pod \"6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:46 crc kubenswrapper[4746]: I0103 03:28:46.883415 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:47 crc kubenswrapper[4746]: I0103 03:28:47.340796 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8"] Jan 03 03:28:47 crc kubenswrapper[4746]: W0103 03:28:47.349919 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c3e120a_2e84_4fb8_b93d_44eaebb61650.slice/crio-e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7 WatchSource:0}: Error finding container e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7: Status 404 returned error can't find the container with id e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7 Jan 03 03:28:48 crc kubenswrapper[4746]: I0103 03:28:48.066910 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerID="2824f19a5478b2a9d7a6572dff9945f8aa19467a047aebcbace327c04ff61568" exitCode=0 Jan 03 03:28:48 crc kubenswrapper[4746]: I0103 03:28:48.067174 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" event={"ID":"1c3e120a-2e84-4fb8-b93d-44eaebb61650","Type":"ContainerDied","Data":"2824f19a5478b2a9d7a6572dff9945f8aa19467a047aebcbace327c04ff61568"} Jan 03 03:28:48 crc kubenswrapper[4746]: I0103 03:28:48.067224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" event={"ID":"1c3e120a-2e84-4fb8-b93d-44eaebb61650","Type":"ContainerStarted","Data":"e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7"} Jan 03 03:28:50 crc kubenswrapper[4746]: I0103 03:28:50.089494 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerID="07e405ee9a0a843825e01481469d9dd41597a3e2ea1195acce308fa806bd4870" exitCode=0 Jan 03 03:28:50 crc kubenswrapper[4746]: I0103 03:28:50.089583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" event={"ID":"1c3e120a-2e84-4fb8-b93d-44eaebb61650","Type":"ContainerDied","Data":"07e405ee9a0a843825e01481469d9dd41597a3e2ea1195acce308fa806bd4870"} Jan 03 03:28:51 crc kubenswrapper[4746]: I0103 03:28:51.100933 4746 generic.go:334] "Generic (PLEG): container finished" podID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerID="41729b43ba759c789f07d0d0fa28ab90f7879f0236a0ce5fa11925eeb9de8b78" exitCode=0 Jan 03 03:28:51 crc kubenswrapper[4746]: I0103 03:28:51.101016 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" event={"ID":"1c3e120a-2e84-4fb8-b93d-44eaebb61650","Type":"ContainerDied","Data":"41729b43ba759c789f07d0d0fa28ab90f7879f0236a0ce5fa11925eeb9de8b78"} Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.492059 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.627096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util\") pod \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.627165 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle\") pod \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.627295 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rqqc\" (UniqueName: \"kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc\") pod \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\" (UID: \"1c3e120a-2e84-4fb8-b93d-44eaebb61650\") " Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.628645 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle" (OuterVolumeSpecName: "bundle") pod "1c3e120a-2e84-4fb8-b93d-44eaebb61650" (UID: "1c3e120a-2e84-4fb8-b93d-44eaebb61650"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.638022 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc" (OuterVolumeSpecName: "kube-api-access-2rqqc") pod "1c3e120a-2e84-4fb8-b93d-44eaebb61650" (UID: "1c3e120a-2e84-4fb8-b93d-44eaebb61650"). InnerVolumeSpecName "kube-api-access-2rqqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.650003 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util" (OuterVolumeSpecName: "util") pod "1c3e120a-2e84-4fb8-b93d-44eaebb61650" (UID: "1c3e120a-2e84-4fb8-b93d-44eaebb61650"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.729507 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.729556 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c3e120a-2e84-4fb8-b93d-44eaebb61650-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:52 crc kubenswrapper[4746]: I0103 03:28:52.729575 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rqqc\" (UniqueName: \"kubernetes.io/projected/1c3e120a-2e84-4fb8-b93d-44eaebb61650-kube-api-access-2rqqc\") on node \"crc\" DevicePath \"\"" Jan 03 03:28:53 crc kubenswrapper[4746]: I0103 03:28:53.121136 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" event={"ID":"1c3e120a-2e84-4fb8-b93d-44eaebb61650","Type":"ContainerDied","Data":"e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7"} Jan 03 03:28:53 crc kubenswrapper[4746]: I0103 03:28:53.121184 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7875166c35da6fb026ca2f079daa8b0498b0a72b3c1d32d54eb98e01634e7e7" Jan 03 03:28:53 crc kubenswrapper[4746]: I0103 03:28:53.121425 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.075162 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:29:01 crc kubenswrapper[4746]: E0103 03:29:01.075830 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="util" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.075841 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="util" Jan 03 03:29:01 crc kubenswrapper[4746]: E0103 03:29:01.075855 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="pull" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.075861 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="pull" Jan 03 03:29:01 crc kubenswrapper[4746]: E0103 03:29:01.075868 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="extract" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.075874 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="extract" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.075972 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" containerName="extract" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.076334 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.081789 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-m62nw" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.081962 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.082673 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.083852 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.137509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrb6\" (UniqueName: \"kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.137555 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.137602 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.239401 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.239476 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.239556 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rrb6\" (UniqueName: \"kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") 
" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.245435 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.250291 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.259127 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rrb6\" (UniqueName: \"kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6\") pod \"mariadb-operator-controller-manager-764d469f78-2w8ll\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.373028 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.373265 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.391438 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:01 crc kubenswrapper[4746]: I0103 03:29:01.792804 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:29:02 crc kubenswrapper[4746]: I0103 03:29:02.170744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" event={"ID":"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5","Type":"ContainerStarted","Data":"03681c96517316481818d049b048dd0f7632e07b0a2a5c67dd067d8079ddfd92"} Jan 03 03:29:09 crc kubenswrapper[4746]: I0103 03:29:09.225459 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" event={"ID":"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5","Type":"ContainerStarted","Data":"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf"} Jan 03 03:29:09 crc kubenswrapper[4746]: I0103 03:29:09.226040 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:09 crc kubenswrapper[4746]: I0103 03:29:09.247527 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" podStartSLOduration=1.420843297 podStartE2EDuration="8.247509253s" podCreationTimestamp="2026-01-03 03:29:01 +0000 UTC" firstStartedPulling="2026-01-03 03:29:01.801182702 +0000 UTC m=+861.651073007" lastFinishedPulling="2026-01-03 03:29:08.627848658 +0000 UTC m=+868.477738963" observedRunningTime="2026-01-03 03:29:09.244277324 +0000 UTC m=+869.094167669" watchObservedRunningTime="2026-01-03 03:29:09.247509253 +0000 UTC m=+869.097399568" Jan 03 03:29:21 crc kubenswrapper[4746]: I0103 03:29:21.404707 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.799053 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.800208 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.804436 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-gk5q7" Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.821006 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.887646 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqq89\" (UniqueName: \"kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89\") pod \"infra-operator-index-x8z9w\" (UID: \"f99c4889-1909-49e2-85f1-775dcc1abc94\") " pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:25 crc kubenswrapper[4746]: I0103 03:29:25.988740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqq89\" (UniqueName: \"kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89\") pod \"infra-operator-index-x8z9w\" (UID: \"f99c4889-1909-49e2-85f1-775dcc1abc94\") " pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:26 crc kubenswrapper[4746]: I0103 03:29:26.021880 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqq89\" (UniqueName: \"kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89\") pod \"infra-operator-index-x8z9w\" (UID: \"f99c4889-1909-49e2-85f1-775dcc1abc94\") " pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:26 crc kubenswrapper[4746]: I0103 03:29:26.124020 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:26 crc kubenswrapper[4746]: I0103 03:29:26.606665 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:26 crc kubenswrapper[4746]: W0103 03:29:26.608991 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf99c4889_1909_49e2_85f1_775dcc1abc94.slice/crio-b7820f67854ad3640b701d252bb0ef901cc4ac4168f9793b88ec7cd21c4c96ee WatchSource:0}: Error finding container b7820f67854ad3640b701d252bb0ef901cc4ac4168f9793b88ec7cd21c4c96ee: Status 404 returned error can't find the container with id b7820f67854ad3640b701d252bb0ef901cc4ac4168f9793b88ec7cd21c4c96ee Jan 03 03:29:27 crc kubenswrapper[4746]: I0103 03:29:27.360362 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x8z9w" event={"ID":"f99c4889-1909-49e2-85f1-775dcc1abc94","Type":"ContainerStarted","Data":"b7820f67854ad3640b701d252bb0ef901cc4ac4168f9793b88ec7cd21c4c96ee"} Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.197445 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.394885 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x8z9w" event={"ID":"f99c4889-1909-49e2-85f1-775dcc1abc94","Type":"ContainerStarted","Data":"968f11c40d6b0167f34764cd9f646159b65663e730c63ff2d2cff9fd85094de7"} Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.408711 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-x8z9w" podStartSLOduration=2.059921898 podStartE2EDuration="4.40864939s" podCreationTimestamp="2026-01-03 03:29:25 +0000 UTC" firstStartedPulling="2026-01-03 03:29:26.611224961 +0000 UTC m=+886.461115266" lastFinishedPulling="2026-01-03 03:29:28.959952443 +0000 UTC m=+888.809842758" observedRunningTime="2026-01-03 03:29:29.407728178 +0000 UTC m=+889.257618523" watchObservedRunningTime="2026-01-03 03:29:29.40864939 +0000 UTC m=+889.258539735" Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.798098 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.799030 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.814232 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:29:29 crc kubenswrapper[4746]: I0103 03:29:29.980412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dmmz\" (UniqueName: \"kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz\") pod \"infra-operator-index-nl4pn\" (UID: \"5ae7d481-e881-447f-a645-f1fbb8acf420\") " pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:30 crc kubenswrapper[4746]: I0103 03:29:30.084573 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dmmz\" (UniqueName: \"kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz\") pod \"infra-operator-index-nl4pn\" (UID: \"5ae7d481-e881-447f-a645-f1fbb8acf420\") " pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:30 crc kubenswrapper[4746]: I0103 03:29:30.136438 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dmmz\" (UniqueName: \"kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz\") pod \"infra-operator-index-nl4pn\" (UID: \"5ae7d481-e881-447f-a645-f1fbb8acf420\") " pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:30 crc kubenswrapper[4746]: I0103 03:29:30.137832 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:30 crc kubenswrapper[4746]: I0103 03:29:30.421242 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-x8z9w" podUID="f99c4889-1909-49e2-85f1-775dcc1abc94" containerName="registry-server" containerID="cri-o://968f11c40d6b0167f34764cd9f646159b65663e730c63ff2d2cff9fd85094de7" gracePeriod=2 Jan 03 03:29:30 crc kubenswrapper[4746]: I0103 03:29:30.973016 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.373645 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.373948 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.512892 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-nl4pn" event={"ID":"5ae7d481-e881-447f-a645-f1fbb8acf420","Type":"ContainerStarted","Data":"c5e464366975673762b23ac94e5e8a775fe43bb9396bce3df6728b9886d1599c"} Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.514464 4746 generic.go:334] "Generic (PLEG): container finished" podID="f99c4889-1909-49e2-85f1-775dcc1abc94" containerID="968f11c40d6b0167f34764cd9f646159b65663e730c63ff2d2cff9fd85094de7" exitCode=0 Jan 03 
03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.514524 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x8z9w" event={"ID":"f99c4889-1909-49e2-85f1-775dcc1abc94","Type":"ContainerDied","Data":"968f11c40d6b0167f34764cd9f646159b65663e730c63ff2d2cff9fd85094de7"} Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.692117 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.817165 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqq89\" (UniqueName: \"kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89\") pod \"f99c4889-1909-49e2-85f1-775dcc1abc94\" (UID: \"f99c4889-1909-49e2-85f1-775dcc1abc94\") " Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.825582 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89" (OuterVolumeSpecName: "kube-api-access-xqq89") pod "f99c4889-1909-49e2-85f1-775dcc1abc94" (UID: "f99c4889-1909-49e2-85f1-775dcc1abc94"). InnerVolumeSpecName "kube-api-access-xqq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:29:31 crc kubenswrapper[4746]: I0103 03:29:31.918846 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqq89\" (UniqueName: \"kubernetes.io/projected/f99c4889-1909-49e2-85f1-775dcc1abc94-kube-api-access-xqq89\") on node \"crc\" DevicePath \"\"" Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.522611 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-nl4pn" event={"ID":"5ae7d481-e881-447f-a645-f1fbb8acf420","Type":"ContainerStarted","Data":"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84"} Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.525574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x8z9w" event={"ID":"f99c4889-1909-49e2-85f1-775dcc1abc94","Type":"ContainerDied","Data":"b7820f67854ad3640b701d252bb0ef901cc4ac4168f9793b88ec7cd21c4c96ee"} Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.525673 4746 scope.go:117] "RemoveContainer" containerID="968f11c40d6b0167f34764cd9f646159b65663e730c63ff2d2cff9fd85094de7" Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.525697 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-x8z9w" Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.546452 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-nl4pn" podStartSLOduration=2.965318132 podStartE2EDuration="3.546432797s" podCreationTimestamp="2026-01-03 03:29:29 +0000 UTC" firstStartedPulling="2026-01-03 03:29:31.009836181 +0000 UTC m=+890.859726496" lastFinishedPulling="2026-01-03 03:29:31.590950846 +0000 UTC m=+891.440841161" observedRunningTime="2026-01-03 03:29:32.545704729 +0000 UTC m=+892.395595034" watchObservedRunningTime="2026-01-03 03:29:32.546432797 +0000 UTC m=+892.396323112" Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.575131 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:32 crc kubenswrapper[4746]: I0103 03:29:32.596537 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-x8z9w"] Jan 03 03:29:34 crc kubenswrapper[4746]: I0103 03:29:34.482934 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99c4889-1909-49e2-85f1-775dcc1abc94" path="/var/lib/kubelet/pods/f99c4889-1909-49e2-85f1-775dcc1abc94/volumes" Jan 03 03:29:40 crc kubenswrapper[4746]: I0103 03:29:40.138393 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:40 crc kubenswrapper[4746]: I0103 03:29:40.139674 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:40 crc kubenswrapper[4746]: I0103 03:29:40.173445 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:40 crc kubenswrapper[4746]: I0103 03:29:40.623017 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.845601 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h"] Jan 03 03:29:42 crc kubenswrapper[4746]: E0103 03:29:42.846152 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99c4889-1909-49e2-85f1-775dcc1abc94" containerName="registry-server" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.846165 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99c4889-1909-49e2-85f1-775dcc1abc94" containerName="registry-server" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.846260 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99c4889-1909-49e2-85f1-775dcc1abc94" containerName="registry-server" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.847154 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.850020 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hpjh5" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.859966 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h"] Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.924592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.924720 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:42 crc kubenswrapper[4746]: I0103 03:29:42.924764 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbg6j\" (UniqueName: \"kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.025848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.025894 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.025916 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbg6j\" (UniqueName: \"kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.026468 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.026591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.061097 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbg6j\" (UniqueName: \"kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j\") pod \"1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.205115 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.473439 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h"] Jan 03 03:29:43 crc kubenswrapper[4746]: I0103 03:29:43.626365 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" event={"ID":"33f665b3-fc2a-41b5-8d80-9601a4af8271","Type":"ContainerStarted","Data":"2f41f1497fd335760918fdb31f7572449d5c70ebcf3e956274a4cef167e525e1"} Jan 03 03:29:44 crc kubenswrapper[4746]: I0103 03:29:44.647764 4746 generic.go:334] "Generic (PLEG): container finished" podID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerID="031dc780099bf16bd57112039bd8b1bb2cc4d6de23290420a71b60df7eb266c2" exitCode=0 Jan 03 03:29:44 crc kubenswrapper[4746]: I0103 03:29:44.647837 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" event={"ID":"33f665b3-fc2a-41b5-8d80-9601a4af8271","Type":"ContainerDied","Data":"031dc780099bf16bd57112039bd8b1bb2cc4d6de23290420a71b60df7eb266c2"} Jan 03 03:29:46 crc kubenswrapper[4746]: I0103 03:29:46.660669 4746 generic.go:334] "Generic (PLEG): container finished" podID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerID="4b008c7dc5acbf9c1310c864311ff69de9c3c9da11286142b311d846a20d6889" exitCode=0 Jan 03 03:29:46 crc kubenswrapper[4746]: I0103 03:29:46.660837 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" event={"ID":"33f665b3-fc2a-41b5-8d80-9601a4af8271","Type":"ContainerDied","Data":"4b008c7dc5acbf9c1310c864311ff69de9c3c9da11286142b311d846a20d6889"} Jan 03 03:29:47 crc kubenswrapper[4746]: I0103 03:29:47.669232 4746 generic.go:334] "Generic (PLEG): container finished" podID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerID="de071812d24df951dff8d6d5838b50f980d010a41a74dfbeb12915581ac88aa8" exitCode=0 Jan 03 03:29:47 crc kubenswrapper[4746]: I0103 03:29:47.669277 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" event={"ID":"33f665b3-fc2a-41b5-8d80-9601a4af8271","Type":"ContainerDied","Data":"de071812d24df951dff8d6d5838b50f980d010a41a74dfbeb12915581ac88aa8"} Jan 03 03:29:48 crc kubenswrapper[4746]: I0103 03:29:48.958885 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.063502 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbg6j\" (UniqueName: \"kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j\") pod \"33f665b3-fc2a-41b5-8d80-9601a4af8271\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.063585 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util\") pod \"33f665b3-fc2a-41b5-8d80-9601a4af8271\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.063710 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle\") pod \"33f665b3-fc2a-41b5-8d80-9601a4af8271\" (UID: \"33f665b3-fc2a-41b5-8d80-9601a4af8271\") " Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.065882 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle" (OuterVolumeSpecName: "bundle") pod "33f665b3-fc2a-41b5-8d80-9601a4af8271" (UID: "33f665b3-fc2a-41b5-8d80-9601a4af8271"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.070088 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j" (OuterVolumeSpecName: "kube-api-access-bbg6j") pod "33f665b3-fc2a-41b5-8d80-9601a4af8271" (UID: "33f665b3-fc2a-41b5-8d80-9601a4af8271"). InnerVolumeSpecName "kube-api-access-bbg6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.074478 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util" (OuterVolumeSpecName: "util") pod "33f665b3-fc2a-41b5-8d80-9601a4af8271" (UID: "33f665b3-fc2a-41b5-8d80-9601a4af8271"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.165067 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.165096 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/33f665b3-fc2a-41b5-8d80-9601a4af8271-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.165105 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbg6j\" (UniqueName: \"kubernetes.io/projected/33f665b3-fc2a-41b5-8d80-9601a4af8271-kube-api-access-bbg6j\") on node \"crc\" DevicePath \"\"" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.685208 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" event={"ID":"33f665b3-fc2a-41b5-8d80-9601a4af8271","Type":"ContainerDied","Data":"2f41f1497fd335760918fdb31f7572449d5c70ebcf3e956274a4cef167e525e1"} Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.685258 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f41f1497fd335760918fdb31f7572449d5c70ebcf3e956274a4cef167e525e1" Jan 03 03:29:49 crc kubenswrapper[4746]: I0103 03:29:49.685284 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.399785 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:29:51 crc kubenswrapper[4746]: E0103 03:29:51.400324 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="util" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.400338 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="util" Jan 03 03:29:51 crc kubenswrapper[4746]: E0103 03:29:51.400355 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="pull" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.400363 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="pull" Jan 03 03:29:51 crc kubenswrapper[4746]: E0103 03:29:51.400381 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="extract" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.400392 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="extract" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.400540 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" containerName="extract" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.404877 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.416072 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.500829 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.500917 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.500981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.601728 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.601793 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.601840 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.602333 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.602402 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.621553 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg\") pod \"redhat-operators-42wrb\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:51 crc kubenswrapper[4746]: I0103 03:29:51.724219 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:29:52 crc kubenswrapper[4746]: I0103 03:29:52.134134 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:29:52 crc kubenswrapper[4746]: I0103 03:29:52.703930 4746 generic.go:334] "Generic (PLEG): container finished" podID="192618f6-c968-4edc-bded-d7289abafcae" containerID="6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1" exitCode=0 Jan 03 03:29:52 crc kubenswrapper[4746]: I0103 03:29:52.703986 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerDied","Data":"6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1"} Jan 03 03:29:52 crc kubenswrapper[4746]: I0103 03:29:52.704043 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerStarted","Data":"001e5a6c08f7c8f5b8b701fcf63bc099068571973acc0e558375f807910ffc4b"} Jan 03 03:29:53 crc kubenswrapper[4746]: I0103 03:29:53.710966 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerStarted","Data":"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57"} Jan 03 03:29:55 crc kubenswrapper[4746]: I0103 03:29:55.723007 4746 generic.go:334] "Generic (PLEG): container finished" podID="192618f6-c968-4edc-bded-d7289abafcae" containerID="bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57" exitCode=0 Jan 03 03:29:55 crc kubenswrapper[4746]: I0103 03:29:55.723333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerDied","Data":"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57"} Jan 03 03:29:56 crc kubenswrapper[4746]: I0103 03:29:56.731848 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerStarted","Data":"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1"} Jan 03 03:29:56 crc kubenswrapper[4746]: I0103 03:29:56.749248 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-42wrb" podStartSLOduration=2.299785029 podStartE2EDuration="5.749229354s" podCreationTimestamp="2026-01-03 03:29:51 +0000 UTC" firstStartedPulling="2026-01-03 03:29:52.705930824 +0000 UTC m=+912.555821119" lastFinishedPulling="2026-01-03 03:29:56.155375139 +0000 UTC m=+916.005265444" observedRunningTime="2026-01-03 03:29:56.746164699 +0000 UTC m=+916.596055024" watchObservedRunningTime="2026-01-03 03:29:56.749229354 +0000 UTC m=+916.599119679" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.162193 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s"] Jan 
03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.163283 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.172132 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.172444 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.178666 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.253545 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzxf\" (UniqueName: \"kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.253580 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.253725 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.354744 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzxf\" (UniqueName: \"kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.354801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.354876 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.355853 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.361032 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.383113 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzxf\" (UniqueName: \"kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf\") pod \"collect-profiles-29456850-h676s\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.488972 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.490108 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.492270 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"openstack-config-data" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.492416 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"kube-root-ca.crt" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.492528 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"openshift-service-ca.crt" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.498588 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"openstack-scripts" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.505631 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"galera-openstack-dockercfg-82mft" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.505849 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.507058 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.511273 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.512421 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.517578 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.539412 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.541398 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.545823 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659676 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8crkr\" (UniqueName: \"kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659769 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659810 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659842 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659863 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659892 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659916 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz67l\" (UniqueName: \"kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659948 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59srz\" (UniqueName: \"kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659969 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.659997 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660017 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660039 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660089 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660119 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.660138 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc 
kubenswrapper[4746]: I0103 03:30:00.660156 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761478 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761541 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761622 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761645 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761684 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761710 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8crkr\" (UniqueName: \"kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761735 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761805 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761841 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761889 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761926 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761973 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz67l\" (UniqueName: \"kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.761999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59srz\" (UniqueName: \"kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.762050 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.762693 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.763351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.763461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.763790 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") device mount path \"/mnt/openstack/pv01\"" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.763810 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") device mount path \"/mnt/openstack/pv12\"" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.764200 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.764270 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") device mount path \"/mnt/openstack/pv09\"" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.765056 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.765578 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated\") pod 
\"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.767457 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.768133 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.773553 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.779637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.780118 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.780414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.783795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.783883 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8crkr\" (UniqueName: \"kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.786795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz67l\" (UniqueName: \"kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l\") pod \"openstack-galera-1\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 
03:30:00.786836 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59srz\" (UniqueName: \"kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.788129 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.797500 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.808997 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.829172 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.829613 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:00 crc kubenswrapper[4746]: I0103 03:30:00.935908 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s"] Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.026959 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.029364 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.041847 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.042158 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2z5s8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.046143 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.176836 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.177150 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.177181 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftn2\" (UniqueName: \"kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.278763 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.278816 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.278868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftn2\" (UniqueName: \"kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.284470 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.286430 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.337251 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftn2\" (UniqueName: \"kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2\") pod \"infra-operator-controller-manager-74dd87d6d6-9nvt8\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.375372 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.375427 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.375477 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.376174 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.376238 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f" gracePeriod=600 Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.399887 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:30:01 crc kubenswrapper[4746]: W0103 03:30:01.410120 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b854ff9_97ba_4cd2_9136_db9e311d5e94.slice/crio-5c7f8be799cb70f1a83480b6b4716e6d4ed652f10717dd2f361f688ed024d166 WatchSource:0}: Error finding container 5c7f8be799cb70f1a83480b6b4716e6d4ed652f10717dd2f361f688ed024d166: 
Status 404 returned error can't find the container with id 5c7f8be799cb70f1a83480b6b4716e6d4ed652f10717dd2f361f688ed024d166 Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.444229 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.469970 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.475829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.640130 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:30:01 crc kubenswrapper[4746]: W0103 03:30:01.647987 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0dc4ece_d74b_462f_9f80_9b5e3b7abe22.slice/crio-0042a9c6f5ad9cb88eda85458bff1fc6e7b31d21f8ee2667af7cd13b049804b1 WatchSource:0}: Error finding container 0042a9c6f5ad9cb88eda85458bff1fc6e7b31d21f8ee2667af7cd13b049804b1: Status 404 returned error can't find the container with id 0042a9c6f5ad9cb88eda85458bff1fc6e7b31d21f8ee2667af7cd13b049804b1 Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.724978 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.725039 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.765194 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerStarted","Data":"0bdb065a9117b3d443635acc904f0b743c9935bb779a30726611e66de92db38f"} Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.766232 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerStarted","Data":"5c7f8be799cb70f1a83480b6b4716e6d4ed652f10717dd2f361f688ed024d166"} Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.767634 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" event={"ID":"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22","Type":"ContainerStarted","Data":"0042a9c6f5ad9cb88eda85458bff1fc6e7b31d21f8ee2667af7cd13b049804b1"} Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.768639 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerStarted","Data":"9cb2ab6664d7da24d103b44fc00c4b710fd36df457bd13ac30f100bcb8e3c1e5"} Jan 03 03:30:01 crc kubenswrapper[4746]: I0103 03:30:01.770843 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" event={"ID":"75f02855-14db-453e-affc-0efbc2538f0b","Type":"ContainerStarted","Data":"279c012b3f881041f538f74ff91d68314ebdbee6dc23b36c13775ae15eecc2dc"} Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.770575 4746 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-42wrb" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="registry-server" probeResult="failure" output=< Jan 03 03:30:02 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Jan 03 03:30:02 crc kubenswrapper[4746]: > Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.784070 4746 generic.go:334] "Generic (PLEG): container finished" podID="75f02855-14db-453e-affc-0efbc2538f0b" containerID="0ce4b61c2e3763485c8714d38114134e682a7469eda2bccfec366770fef0fa93" exitCode=0 Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.784207 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" event={"ID":"75f02855-14db-453e-affc-0efbc2538f0b","Type":"ContainerDied","Data":"0ce4b61c2e3763485c8714d38114134e682a7469eda2bccfec366770fef0fa93"} Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.793949 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f" exitCode=0 Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.794003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f"} Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.794035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e"} Jan 03 03:30:02 crc kubenswrapper[4746]: I0103 03:30:02.794054 4746 scope.go:117] "RemoveContainer" containerID="4e73d799a311783ed2ed25907dcb1be6ade63e15caa315b94224accb77b9a4df" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.163289 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.234521 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume\") pod \"75f02855-14db-453e-affc-0efbc2538f0b\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.234783 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzzxf\" (UniqueName: \"kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf\") pod \"75f02855-14db-453e-affc-0efbc2538f0b\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.234868 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume\") pod \"75f02855-14db-453e-affc-0efbc2538f0b\" (UID: \"75f02855-14db-453e-affc-0efbc2538f0b\") " Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.235684 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "75f02855-14db-453e-affc-0efbc2538f0b" (UID: "75f02855-14db-453e-affc-0efbc2538f0b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.241385 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf" (OuterVolumeSpecName: "kube-api-access-hzzxf") pod "75f02855-14db-453e-affc-0efbc2538f0b" (UID: "75f02855-14db-453e-affc-0efbc2538f0b"). InnerVolumeSpecName "kube-api-access-hzzxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.248971 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75f02855-14db-453e-affc-0efbc2538f0b" (UID: "75f02855-14db-453e-affc-0efbc2538f0b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.336811 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzzxf\" (UniqueName: \"kubernetes.io/projected/75f02855-14db-453e-affc-0efbc2538f0b-kube-api-access-hzzxf\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.336845 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75f02855-14db-453e-affc-0efbc2538f0b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.336861 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75f02855-14db-453e-affc-0efbc2538f0b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.819084 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" event={"ID":"75f02855-14db-453e-affc-0efbc2538f0b","Type":"ContainerDied","Data":"279c012b3f881041f538f74ff91d68314ebdbee6dc23b36c13775ae15eecc2dc"} Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.819124 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456850-h676s" Jan 03 03:30:04 crc kubenswrapper[4746]: I0103 03:30:04.819132 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279c012b3f881041f538f74ff91d68314ebdbee6dc23b36c13775ae15eecc2dc" Jan 03 03:30:11 crc kubenswrapper[4746]: I0103 03:30:11.791028 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:11 crc kubenswrapper[4746]: I0103 03:30:11.840096 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:14 crc kubenswrapper[4746]: I0103 03:30:14.190627 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:30:14 crc kubenswrapper[4746]: I0103 03:30:14.191524 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-42wrb" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="registry-server" containerID="cri-o://8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1" gracePeriod=2 Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.244621 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.372199 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities\") pod \"192618f6-c968-4edc-bded-d7289abafcae\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.372802 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg\") pod \"192618f6-c968-4edc-bded-d7289abafcae\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.373013 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content\") pod \"192618f6-c968-4edc-bded-d7289abafcae\" (UID: \"192618f6-c968-4edc-bded-d7289abafcae\") " Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.373303 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities" (OuterVolumeSpecName: "utilities") pod "192618f6-c968-4edc-bded-d7289abafcae" (UID: "192618f6-c968-4edc-bded-d7289abafcae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.373641 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.378290 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg" (OuterVolumeSpecName: "kube-api-access-rdwtg") pod "192618f6-c968-4edc-bded-d7289abafcae" (UID: "192618f6-c968-4edc-bded-d7289abafcae"). InnerVolumeSpecName "kube-api-access-rdwtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.478425 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdwtg\" (UniqueName: \"kubernetes.io/projected/192618f6-c968-4edc-bded-d7289abafcae-kube-api-access-rdwtg\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.501520 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "192618f6-c968-4edc-bded-d7289abafcae" (UID: "192618f6-c968-4edc-bded-d7289abafcae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.580547 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192618f6-c968-4edc-bded-d7289abafcae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.910714 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" event={"ID":"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22","Type":"ContainerStarted","Data":"5a1586d3c152a22244938286070f56ecfe1e54de4ded7b3df6c8682b79218676"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.914605 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerStarted","Data":"2c6b2e1cef5c9b100a1d4e305294bbb713c9fa4c864e922c3ffacbc2d992507f"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.917006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerStarted","Data":"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.919916 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerStarted","Data":"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.924087 4746 generic.go:334] "Generic (PLEG): container finished" podID="192618f6-c968-4edc-bded-d7289abafcae" containerID="8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1" exitCode=0 Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.924157 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42wrb" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.924165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerDied","Data":"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.924738 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42wrb" event={"ID":"192618f6-c968-4edc-bded-d7289abafcae","Type":"ContainerDied","Data":"001e5a6c08f7c8f5b8b701fcf63bc099068571973acc0e558375f807910ffc4b"} Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.924798 4746 scope.go:117] "RemoveContainer" containerID="8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.942485 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" podStartSLOduration=2.6431593700000002 podStartE2EDuration="15.942454224s" podCreationTimestamp="2026-01-03 03:30:00 +0000 UTC" firstStartedPulling="2026-01-03 03:30:01.655052167 +0000 UTC m=+921.504942472" lastFinishedPulling="2026-01-03 03:30:14.954347021 +0000 UTC m=+934.804237326" observedRunningTime="2026-01-03 03:30:15.94106284 +0000 UTC m=+935.790953155" watchObservedRunningTime="2026-01-03 03:30:15.942454224 +0000 UTC m=+935.792344529" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.948278 4746 scope.go:117] "RemoveContainer" containerID="bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57" Jan 03 03:30:15 crc kubenswrapper[4746]: I0103 03:30:15.991162 4746 scope.go:117] "RemoveContainer" containerID="6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.049721 4746 scope.go:117] "RemoveContainer" containerID="8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1" Jan 03 03:30:16 crc kubenswrapper[4746]: E0103 03:30:16.054920 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1\": container with ID starting with 8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1 not found: ID does not exist" containerID="8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.054960 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1"} err="failed to get container status \"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1\": rpc error: code = NotFound desc = could not find container \"8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1\": container with ID starting with 8c2f4a6f0e4f0d7889a6823230ed7309f1e2810fa3ac1bcbca8933272e3310e1 not found: ID does not exist" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.054986 4746 scope.go:117] "RemoveContainer" containerID="bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57" Jan 03 03:30:16 crc kubenswrapper[4746]: E0103 03:30:16.055241 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57\": container with ID starting with bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57 not found: ID does not exist" containerID="bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.055263 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57"} err="failed to get container status \"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57\": rpc error: code = NotFound desc = could not find container \"bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57\": container with ID starting with bbd4ba49ededcb085516e89a2d22015c02cfdc5dcaff1d13b1a66593d5e94a57 not found: ID does not exist" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.055277 4746 scope.go:117] "RemoveContainer" containerID="6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1" Jan 03 03:30:16 crc kubenswrapper[4746]: E0103 03:30:16.055442 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1\": container with ID starting with 6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1 not found: ID does not exist" containerID="6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.055462 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1"} err="failed to get container status \"6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1\": rpc error: code = NotFound desc = could not find container \"6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1\": container with ID starting with 6539aed87edab5a4195bb9de453dc6ca3bcdafc8ae5645369c0d63fe361b2bc1 not found: ID does not exist" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.065121 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.066632 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-42wrb"] Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.477063 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192618f6-c968-4edc-bded-d7289abafcae" path="/var/lib/kubelet/pods/192618f6-c968-4edc-bded-d7289abafcae/volumes" Jan 03 03:30:16 crc kubenswrapper[4746]: I0103 03:30:16.932374 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.953521 4746 generic.go:334] "Generic (PLEG): container finished" podID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerID="44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69" exitCode=0 Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.953624 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerDied","Data":"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69"} Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.956099 4746 
generic.go:334] "Generic (PLEG): container finished" podID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerID="2c6b2e1cef5c9b100a1d4e305294bbb713c9fa4c864e922c3ffacbc2d992507f" exitCode=0 Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.956183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerDied","Data":"2c6b2e1cef5c9b100a1d4e305294bbb713c9fa4c864e922c3ffacbc2d992507f"} Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.963961 4746 generic.go:334] "Generic (PLEG): container finished" podID="0a58aeed-241f-4361-8570-043366a4a146" containerID="5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372" exitCode=0 Jan 03 03:30:18 crc kubenswrapper[4746]: I0103 03:30:18.964016 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerDied","Data":"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372"} Jan 03 03:30:19 crc kubenswrapper[4746]: I0103 03:30:19.972713 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerStarted","Data":"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318"} Jan 03 03:30:19 crc kubenswrapper[4746]: I0103 03:30:19.974466 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerStarted","Data":"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88"} Jan 03 03:30:19 crc kubenswrapper[4746]: I0103 03:30:19.977181 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerStarted","Data":"b9dfb9132a8c309c717cc27f788ee418a82a6964fbb89d7f66834817d1bfa2a8"} Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.021000 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/openstack-galera-0" podStartSLOduration=7.40991809 podStartE2EDuration="21.020982026s" podCreationTimestamp="2026-01-03 03:29:59 +0000 UTC" firstStartedPulling="2026-01-03 03:30:01.495315632 +0000 UTC m=+921.345205937" lastFinishedPulling="2026-01-03 03:30:15.106379548 +0000 UTC m=+934.956269873" observedRunningTime="2026-01-03 03:30:20.018220068 +0000 UTC m=+939.868110373" watchObservedRunningTime="2026-01-03 03:30:20.020982026 +0000 UTC m=+939.870872331" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.024155 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/openstack-galera-1" podStartSLOduration=7.305914288 podStartE2EDuration="21.024141463s" podCreationTimestamp="2026-01-03 03:29:59 +0000 UTC" firstStartedPulling="2026-01-03 03:30:01.41257701 +0000 UTC m=+921.262467315" lastFinishedPulling="2026-01-03 03:30:15.130804185 +0000 UTC m=+934.980694490" observedRunningTime="2026-01-03 03:30:20.002229477 +0000 UTC m=+939.852119782" watchObservedRunningTime="2026-01-03 03:30:20.024141463 +0000 UTC m=+939.874031768" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.039083 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/openstack-galera-2" podStartSLOduration=7.477076923 podStartE2EDuration="21.039064298s" podCreationTimestamp="2026-01-03 
03:29:59 +0000 UTC" firstStartedPulling="2026-01-03 03:30:01.499629638 +0000 UTC m=+921.349519943" lastFinishedPulling="2026-01-03 03:30:15.061617003 +0000 UTC m=+934.911507318" observedRunningTime="2026-01-03 03:30:20.036527026 +0000 UTC m=+939.886417331" watchObservedRunningTime="2026-01-03 03:30:20.039064298 +0000 UTC m=+939.888954603" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.809999 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.810089 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.831814 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.832046 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.832073 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:20 crc kubenswrapper[4746]: I0103 03:30:20.832124 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:21 crc kubenswrapper[4746]: I0103 03:30:21.450234 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.839917 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:30:25 crc kubenswrapper[4746]: E0103 03:30:25.840755 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f02855-14db-453e-affc-0efbc2538f0b" containerName="collect-profiles" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.840772 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f02855-14db-453e-affc-0efbc2538f0b" containerName="collect-profiles" Jan 03 03:30:25 crc kubenswrapper[4746]: E0103 03:30:25.840791 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="registry-server" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.840798 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="registry-server" Jan 03 03:30:25 crc kubenswrapper[4746]: E0103 03:30:25.840807 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="extract-content" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.840814 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="extract-content" Jan 03 03:30:25 crc kubenswrapper[4746]: E0103 03:30:25.840834 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="extract-utilities" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.840842 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="extract-utilities" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.840982 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="192618f6-c968-4edc-bded-d7289abafcae" containerName="registry-server" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.841000 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f02855-14db-453e-affc-0efbc2538f0b" containerName="collect-profiles" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.841549 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.843817 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"memcached-memcached-dockercfg-p67dp" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.844618 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"memcached-config-data" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.859693 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.939918 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66x2k\" (UniqueName: \"kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.940016 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:25 crc kubenswrapper[4746]: I0103 03:30:25.940224 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.041820 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.041913 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.042027 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66x2k\" (UniqueName: \"kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.042797 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config\") pod \"memcached-0\" (UID: 
\"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.043268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.063324 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66x2k\" (UniqueName: \"kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k\") pod \"memcached-0\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.166977 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:26 crc kubenswrapper[4746]: I0103 03:30:26.609384 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:30:27 crc kubenswrapper[4746]: I0103 03:30:27.026169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/memcached-0" event={"ID":"829fb3d2-d144-42b6-9e2c-493ae34fdf6a","Type":"ContainerStarted","Data":"1333758e74778e8c69f43f13ed72dae4f77dc4ed7dd4e6707a0238f7ff3d3d22"} Jan 03 03:30:28 crc kubenswrapper[4746]: I0103 03:30:28.963221 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.050271 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/memcached-0" event={"ID":"829fb3d2-d144-42b6-9e2c-493ae34fdf6a","Type":"ContainerStarted","Data":"3625c9ce57252a3d32952a5b650a167d0c04ed4aa1946b9032825c6e9970035c"} Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.050565 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.056899 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.068248 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/memcached-0" podStartSLOduration=2.176469436 podStartE2EDuration="4.068227527s" podCreationTimestamp="2026-01-03 03:30:25 +0000 UTC" firstStartedPulling="2026-01-03 03:30:26.619059632 +0000 UTC m=+946.468949937" lastFinishedPulling="2026-01-03 03:30:28.510817723 +0000 UTC m=+948.360708028" observedRunningTime="2026-01-03 03:30:29.066895534 +0000 UTC m=+948.916785849" watchObservedRunningTime="2026-01-03 03:30:29.068227527 +0000 UTC m=+948.918117842" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.128856 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.130135 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.132165 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-t7ks8" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.142393 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.308716 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ftk\" (UniqueName: \"kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk\") pod \"rabbitmq-cluster-operator-index-v7lbc\" (UID: \"2f26de35-b326-4263-9bb0-945d8ece35fb\") " pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.409531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ftk\" (UniqueName: \"kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk\") pod \"rabbitmq-cluster-operator-index-v7lbc\" (UID: \"2f26de35-b326-4263-9bb0-945d8ece35fb\") " pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.428983 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8ftk\" (UniqueName: \"kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk\") pod \"rabbitmq-cluster-operator-index-v7lbc\" (UID: \"2f26de35-b326-4263-9bb0-945d8ece35fb\") " pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.462392 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.593556 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/root-account-create-update-q7xkl"] Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.600984 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.604407 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.606283 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-q7xkl"] Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.717549 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.717635 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhdm\" (UniqueName: \"kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.819087 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.819574 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxhdm\" (UniqueName: \"kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.819903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.831299 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.851639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxhdm\" (UniqueName: \"kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm\") pod \"root-account-create-update-q7xkl\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:29 crc kubenswrapper[4746]: I0103 03:30:29.957467 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:30 crc kubenswrapper[4746]: I0103 03:30:30.063673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" event={"ID":"2f26de35-b326-4263-9bb0-945d8ece35fb","Type":"ContainerStarted","Data":"b96c9c86d896802dd6ac21af459c298f806863c5e9936ffe262f3155b8b8886a"} Jan 03 03:30:30 crc kubenswrapper[4746]: I0103 03:30:30.570380 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-q7xkl"] Jan 03 03:30:30 crc kubenswrapper[4746]: W0103 03:30:30.576774 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e84b37b_8dc8_4a4c_bb3d_2708cf7d56e9.slice/crio-366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39 WatchSource:0}: Error finding container 366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39: Status 404 returned error can't find the container with id 366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39 Jan 03 03:30:31 crc kubenswrapper[4746]: I0103 03:30:31.086166 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" event={"ID":"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9","Type":"ContainerStarted","Data":"93a19824f292d0a0ffcb6285653829aa8d58a2daf6e5989893ec7768301d2540"} Jan 03 03:30:31 crc kubenswrapper[4746]: I0103 03:30:31.086544 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" event={"ID":"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9","Type":"ContainerStarted","Data":"366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39"} Jan 03 03:30:31 crc kubenswrapper[4746]: I0103 03:30:31.112201 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" podStartSLOduration=2.112183348 podStartE2EDuration="2.112183348s" podCreationTimestamp="2026-01-03 03:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:30:31.110038425 +0000 UTC m=+950.959928730" watchObservedRunningTime="2026-01-03 03:30:31.112183348 +0000 UTC m=+950.962073653" Jan 03 03:30:32 crc kubenswrapper[4746]: I0103 03:30:32.092732 4746 generic.go:334] "Generic (PLEG): container finished" podID="7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" containerID="93a19824f292d0a0ffcb6285653829aa8d58a2daf6e5989893ec7768301d2540" exitCode=0 Jan 03 03:30:32 crc kubenswrapper[4746]: I0103 03:30:32.092800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" event={"ID":"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9","Type":"ContainerDied","Data":"93a19824f292d0a0ffcb6285653829aa8d58a2daf6e5989893ec7768301d2540"} Jan 03 03:30:36 crc kubenswrapper[4746]: I0103 03:30:36.168824 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.579159 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.742639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts\") pod \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.743121 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhdm\" (UniqueName: \"kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm\") pod \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\" (UID: \"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9\") " Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.743885 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" (UID: "7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.752958 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm" (OuterVolumeSpecName: "kube-api-access-fxhdm") pod "7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" (UID: "7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9"). InnerVolumeSpecName "kube-api-access-fxhdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.845305 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:37 crc kubenswrapper[4746]: I0103 03:30:37.845760 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxhdm\" (UniqueName: \"kubernetes.io/projected/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9-kube-api-access-fxhdm\") on node \"crc\" DevicePath \"\"" Jan 03 03:30:38 crc kubenswrapper[4746]: I0103 03:30:38.143551 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" event={"ID":"7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9","Type":"ContainerDied","Data":"366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39"} Jan 03 03:30:38 crc kubenswrapper[4746]: I0103 03:30:38.143610 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="366dfc4445e8a2ff7c46934a6664314cd224e7236a98037b4e94c2ee81311e39" Jan 03 03:30:38 crc kubenswrapper[4746]: I0103 03:30:38.143615 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-q7xkl" Jan 03 03:30:39 crc kubenswrapper[4746]: I0103 03:30:39.151156 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" event={"ID":"2f26de35-b326-4263-9bb0-945d8ece35fb","Type":"ContainerStarted","Data":"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58"} Jan 03 03:30:39 crc kubenswrapper[4746]: I0103 03:30:39.165537 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" podStartSLOduration=1.7149072570000001 podStartE2EDuration="10.165521665s" podCreationTimestamp="2026-01-03 03:30:29 +0000 UTC" firstStartedPulling="2026-01-03 03:30:29.866856638 +0000 UTC m=+949.716746943" lastFinishedPulling="2026-01-03 03:30:38.317471046 +0000 UTC m=+958.167361351" observedRunningTime="2026-01-03 03:30:39.165364302 +0000 UTC m=+959.015254607" watchObservedRunningTime="2026-01-03 03:30:39.165521665 +0000 UTC m=+959.015411970" Jan 03 03:30:39 crc kubenswrapper[4746]: I0103 03:30:39.463189 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:39 crc kubenswrapper[4746]: I0103 03:30:39.463345 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:39 crc kubenswrapper[4746]: I0103 03:30:39.505832 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:40 crc kubenswrapper[4746]: I0103 03:30:40.928179 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="barbican-kuttl-tests/openstack-galera-2" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="galera" probeResult="failure" output=< Jan 03 03:30:40 crc kubenswrapper[4746]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 03 03:30:40 crc kubenswrapper[4746]: > Jan 03 03:30:43 crc kubenswrapper[4746]: I0103 03:30:43.263892 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:43 crc kubenswrapper[4746]: I0103 03:30:43.377481 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:30:49 crc kubenswrapper[4746]: I0103 03:30:49.653488 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:30:50 crc kubenswrapper[4746]: I0103 03:30:50.630636 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:30:50 crc kubenswrapper[4746]: I0103 03:30:50.725706 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.367767 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2"] Jan 03 03:31:02 crc kubenswrapper[4746]: E0103 03:31:02.368588 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" containerName="mariadb-account-create-update" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.368603 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" containerName="mariadb-account-create-update" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.368759 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" containerName="mariadb-account-create-update" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.369820 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.378131 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hpjh5" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.384846 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2"] Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.403450 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.403509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.403570 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4tjg\" (UniqueName: \"kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.504895 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.504953 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.505054 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tjg\" (UniqueName: \"kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg\") pod 
\"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.505535 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.505795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.532241 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tjg\" (UniqueName: \"kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:02 crc kubenswrapper[4746]: I0103 03:31:02.688376 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:03 crc kubenswrapper[4746]: I0103 03:31:03.135847 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2"] Jan 03 03:31:03 crc kubenswrapper[4746]: W0103 03:31:03.142367 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e89f99_dcd1_4eb2_9a47_412b5e25ffa6.slice/crio-fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8 WatchSource:0}: Error finding container fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8: Status 404 returned error can't find the container with id fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8 Jan 03 03:31:03 crc kubenswrapper[4746]: I0103 03:31:03.362884 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerStarted","Data":"1a982ac140c728ca35d38ef8d58de515175fa0997736c21cc29e22ccd2f89f9b"} Jan 03 03:31:03 crc kubenswrapper[4746]: I0103 03:31:03.363218 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerStarted","Data":"fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8"} Jan 03 03:31:04 crc kubenswrapper[4746]: I0103 03:31:04.370738 4746 generic.go:334] "Generic (PLEG): container finished" podID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerID="1a982ac140c728ca35d38ef8d58de515175fa0997736c21cc29e22ccd2f89f9b" exitCode=0 Jan 03 03:31:04 crc 
kubenswrapper[4746]: I0103 03:31:04.370790 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerDied","Data":"1a982ac140c728ca35d38ef8d58de515175fa0997736c21cc29e22ccd2f89f9b"} Jan 03 03:31:05 crc kubenswrapper[4746]: I0103 03:31:05.377818 4746 generic.go:334] "Generic (PLEG): container finished" podID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerID="6705f27afb135cb7e409bc3fe98b5791a7a410b5e88eaddecc1365eb94c26b18" exitCode=0 Jan 03 03:31:05 crc kubenswrapper[4746]: I0103 03:31:05.377865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerDied","Data":"6705f27afb135cb7e409bc3fe98b5791a7a410b5e88eaddecc1365eb94c26b18"} Jan 03 03:31:06 crc kubenswrapper[4746]: I0103 03:31:06.388793 4746 generic.go:334] "Generic (PLEG): container finished" podID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerID="4dc8b346ad00c70fe7eba5dddfdaa8e986bc08df16b09d10f2357b5f4cd7287e" exitCode=0 Jan 03 03:31:06 crc kubenswrapper[4746]: I0103 03:31:06.388849 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerDied","Data":"4dc8b346ad00c70fe7eba5dddfdaa8e986bc08df16b09d10f2357b5f4cd7287e"} Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.759849 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.886429 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4tjg\" (UniqueName: \"kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg\") pod \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.886510 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle\") pod \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.886557 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util\") pod \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\" (UID: \"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6\") " Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.887351 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle" (OuterVolumeSpecName: "bundle") pod "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" (UID: "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.892731 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg" (OuterVolumeSpecName: "kube-api-access-m4tjg") pod "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" (UID: "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6"). InnerVolumeSpecName "kube-api-access-m4tjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.899534 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util" (OuterVolumeSpecName: "util") pod "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" (UID: "03e89f99-dcd1-4eb2-9a47-412b5e25ffa6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.988014 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4tjg\" (UniqueName: \"kubernetes.io/projected/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-kube-api-access-m4tjg\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.988058 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:07 crc kubenswrapper[4746]: I0103 03:31:07.988073 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:08 crc kubenswrapper[4746]: I0103 03:31:08.409493 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" event={"ID":"03e89f99-dcd1-4eb2-9a47-412b5e25ffa6","Type":"ContainerDied","Data":"fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8"} Jan 03 03:31:08 crc kubenswrapper[4746]: I0103 03:31:08.409565 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7f5f56e3f2af7667d690b04d16c5e81227709726d5853e4d1cc7b206404fe8" Jan 03 03:31:08 crc kubenswrapper[4746]: I0103 03:31:08.409606 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.777568 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:31:16 crc kubenswrapper[4746]: E0103 03:31:16.778412 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="util" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.778428 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="util" Jan 03 03:31:16 crc kubenswrapper[4746]: E0103 03:31:16.778448 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="pull" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.778456 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="pull" Jan 03 03:31:16 crc kubenswrapper[4746]: E0103 03:31:16.778480 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="extract" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.778490 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="extract" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.778626 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" containerName="extract" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.779212 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.781587 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-9ckfh" Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.796894 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:31:16 crc kubenswrapper[4746]: I0103 03:31:16.939890 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46jtm\" (UniqueName: \"kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm\") pod \"rabbitmq-cluster-operator-779fc9694b-vc5fp\" (UID: \"61c330cc-8ee3-478a-8b0d-11170df356bf\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.041510 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46jtm\" (UniqueName: \"kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm\") pod \"rabbitmq-cluster-operator-779fc9694b-vc5fp\" (UID: \"61c330cc-8ee3-478a-8b0d-11170df356bf\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.060213 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46jtm\" (UniqueName: \"kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm\") pod \"rabbitmq-cluster-operator-779fc9694b-vc5fp\" (UID: \"61c330cc-8ee3-478a-8b0d-11170df356bf\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" 
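
Note on the pod_startup_latency_tracker entries above: for every such entry in this log, the reported values are internally consistent — podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below is illustrative only (not kubelet source); it simply rechecks that arithmetic using wall-clock values copied verbatim from the memcached-0 entry earlier in this log.

    # Recheck the duration arithmetic from the memcached-0
    # "Observed pod startup duration" entry above (values copied from the log).
    def secs(hh, mm, ss):
        """Seconds since midnight for an HH:MM:SS.fraction timestamp on the same day."""
        return hh * 3600 + mm * 60 + ss

    creation       = secs(3, 30, 25.0)            # podCreationTimestamp (second precision)
    first_pull     = secs(3, 30, 26.619059632)    # firstStartedPulling
    last_pull      = secs(3, 30, 28.510817723)    # lastFinishedPulling
    watch_observed = secs(3, 30, 29.068227527)    # watchObservedRunningTime

    e2e = watch_observed - creation               # ~4.068228 s == podStartE2EDuration
    slo = e2e - (last_pull - first_pull)          # ~2.176469 s == podStartSLOduration
    print(round(e2e, 6), round(slo, 6))

The same relation holds for the rabbitmq-cluster-operator-index-v7lbc and rabbitmq-cluster-operator-779fc9694b-vc5fp entries, which is useful when reading these logs: a large gap between podStartSLOduration and podStartE2EDuration indicates time spent pulling images rather than time spent starting containers.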
Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.139879 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.363076 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.463026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" event={"ID":"61c330cc-8ee3-478a-8b0d-11170df356bf","Type":"ContainerStarted","Data":"15d9f6e970b6e796b6ff209cc877f625ab9a9f96dcaa717996d38af3466cdcca"} Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.729773 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.731258 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.743099 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.852090 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj7ls\" (UniqueName: \"kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.852298 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.852341 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.953536 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.953594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.954278 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities\") pod 
\"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.954329 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj7ls\" (UniqueName: \"kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.954298 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:17 crc kubenswrapper[4746]: I0103 03:31:17.971332 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj7ls\" (UniqueName: \"kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls\") pod \"community-operators-4m4k7\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:18 crc kubenswrapper[4746]: I0103 03:31:18.052035 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:18 crc kubenswrapper[4746]: I0103 03:31:18.545841 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:18 crc kubenswrapper[4746]: W0103 03:31:18.569701 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84767a0c_98eb_4e0d_8882_d22d3f00baea.slice/crio-1e1fbedc9e1a34943e0ac732d2597d11a282b0fb2a99f28c42d65d6d693eec82 WatchSource:0}: Error finding container 1e1fbedc9e1a34943e0ac732d2597d11a282b0fb2a99f28c42d65d6d693eec82: Status 404 returned error can't find the container with id 1e1fbedc9e1a34943e0ac732d2597d11a282b0fb2a99f28c42d65d6d693eec82 Jan 03 03:31:19 crc kubenswrapper[4746]: I0103 03:31:19.478460 4746 generic.go:334] "Generic (PLEG): container finished" podID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerID="ead469f04d4f9ad8819ffe706319d4dc1536ade951cf3288fea5c61cc81e7d1e" exitCode=0 Jan 03 03:31:19 crc kubenswrapper[4746]: I0103 03:31:19.478763 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerDied","Data":"ead469f04d4f9ad8819ffe706319d4dc1536ade951cf3288fea5c61cc81e7d1e"} Jan 03 03:31:19 crc kubenswrapper[4746]: I0103 03:31:19.478794 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerStarted","Data":"1e1fbedc9e1a34943e0ac732d2597d11a282b0fb2a99f28c42d65d6d693eec82"} Jan 03 03:31:22 crc kubenswrapper[4746]: I0103 03:31:22.540040 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" event={"ID":"61c330cc-8ee3-478a-8b0d-11170df356bf","Type":"ContainerStarted","Data":"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c"} Jan 03 03:31:22 crc kubenswrapper[4746]: I0103 03:31:22.543196 4746 
generic.go:334] "Generic (PLEG): container finished" podID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerID="ce7523f276e76a3e38aa6536638bdf98a593c997c145a6f3a5fbc7e32ffc052c" exitCode=0 Jan 03 03:31:22 crc kubenswrapper[4746]: I0103 03:31:22.543460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerDied","Data":"ce7523f276e76a3e38aa6536638bdf98a593c997c145a6f3a5fbc7e32ffc052c"} Jan 03 03:31:22 crc kubenswrapper[4746]: I0103 03:31:22.558027 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" podStartSLOduration=2.214929435 podStartE2EDuration="6.558012951s" podCreationTimestamp="2026-01-03 03:31:16 +0000 UTC" firstStartedPulling="2026-01-03 03:31:17.374572312 +0000 UTC m=+997.224462617" lastFinishedPulling="2026-01-03 03:31:21.717655828 +0000 UTC m=+1001.567546133" observedRunningTime="2026-01-03 03:31:22.556889494 +0000 UTC m=+1002.406779839" watchObservedRunningTime="2026-01-03 03:31:22.558012951 +0000 UTC m=+1002.407903266" Jan 03 03:31:23 crc kubenswrapper[4746]: I0103 03:31:23.552327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerStarted","Data":"5fa5e5f6fc0b4ee1a170be6a15e2240337e8785ed281f6a031cd8ee36be6c071"} Jan 03 03:31:23 crc kubenswrapper[4746]: I0103 03:31:23.574527 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4m4k7" podStartSLOduration=3.091987975 podStartE2EDuration="6.574501942s" podCreationTimestamp="2026-01-03 03:31:17 +0000 UTC" firstStartedPulling="2026-01-03 03:31:19.480823763 +0000 UTC m=+999.330714068" lastFinishedPulling="2026-01-03 03:31:22.96333773 +0000 UTC m=+1002.813228035" observedRunningTime="2026-01-03 03:31:23.568605897 +0000 UTC m=+1003.418496202" watchObservedRunningTime="2026-01-03 03:31:23.574501942 +0000 UTC m=+1003.424392247" Jan 03 03:31:28 crc kubenswrapper[4746]: I0103 03:31:28.052516 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:28 crc kubenswrapper[4746]: I0103 03:31:28.053152 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:28 crc kubenswrapper[4746]: I0103 03:31:28.121735 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:28 crc kubenswrapper[4746]: I0103 03:31:28.662160 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.064364 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.065580 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.067666 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"rabbitmq-default-user" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.068034 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"rabbitmq-server-dockercfg-rt5j6" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.068269 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"rabbitmq-server-conf" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.068299 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"barbican-kuttl-tests"/"rabbitmq-plugins-conf" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.068464 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128397 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99gz\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128538 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128571 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128604 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128644 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128852 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.128890 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.159871 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230309 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230345 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230373 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230408 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p99gz\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230450 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " 
pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.230489 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.231419 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.231706 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.231737 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.233633 4746 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.233740 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6faaf1e14e1a02cf74ddbb416d94f0253b21dcd314b010e1b7ed2844780e306d/globalmount\"" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.236263 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.238179 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.238582 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.249400 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p99gz\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.255118 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") pod \"rabbitmq-server-0\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.390488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:31:29 crc kubenswrapper[4746]: I0103 03:31:29.690298 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:31:29 crc kubenswrapper[4746]: W0103 03:31:29.703359 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2615393c_ec92_4378_9eb7_4a5043a44bb6.slice/crio-0ec70d0d0f544f8d31fe675a33ce9ded8e4c171cb88314292f0a835c7429b5dd WatchSource:0}: Error finding container 0ec70d0d0f544f8d31fe675a33ce9ded8e4c171cb88314292f0a835c7429b5dd: Status 404 returned error can't find the container with id 0ec70d0d0f544f8d31fe675a33ce9ded8e4c171cb88314292f0a835c7429b5dd Jan 03 03:31:30 crc kubenswrapper[4746]: I0103 03:31:30.599606 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerStarted","Data":"0ec70d0d0f544f8d31fe675a33ce9ded8e4c171cb88314292f0a835c7429b5dd"} Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.522138 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.523354 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.526761 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-gww4g" Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.536635 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.559183 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94nk\" (UniqueName: \"kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk\") pod \"keystone-operator-index-nxg77\" (UID: \"b0b84bd0-171e-4129-b0d4-42a68cd8075b\") " pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.660357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n94nk\" (UniqueName: \"kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk\") pod \"keystone-operator-index-nxg77\" (UID: \"b0b84bd0-171e-4129-b0d4-42a68cd8075b\") " pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.693177 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n94nk\" (UniqueName: \"kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk\") pod \"keystone-operator-index-nxg77\" (UID: \"b0b84bd0-171e-4129-b0d4-42a68cd8075b\") " pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:31 crc kubenswrapper[4746]: I0103 03:31:31.858533 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:32 crc kubenswrapper[4746]: I0103 03:31:32.406924 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:31:32 crc kubenswrapper[4746]: I0103 03:31:32.614729 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nxg77" event={"ID":"b0b84bd0-171e-4129-b0d4-42a68cd8075b","Type":"ContainerStarted","Data":"04668fadb52f8fe7a612c0d09034747d8529cc1701c64b1b2e67fdd17b8dce34"} Jan 03 03:31:32 crc kubenswrapper[4746]: I0103 03:31:32.917152 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:32 crc kubenswrapper[4746]: I0103 03:31:32.917381 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4m4k7" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="registry-server" containerID="cri-o://5fa5e5f6fc0b4ee1a170be6a15e2240337e8785ed281f6a031cd8ee36be6c071" gracePeriod=2 Jan 03 03:31:33 crc kubenswrapper[4746]: I0103 03:31:33.621980 4746 generic.go:334] "Generic (PLEG): container finished" podID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerID="5fa5e5f6fc0b4ee1a170be6a15e2240337e8785ed281f6a031cd8ee36be6c071" exitCode=0 Jan 03 03:31:33 crc kubenswrapper[4746]: I0103 03:31:33.622045 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerDied","Data":"5fa5e5f6fc0b4ee1a170be6a15e2240337e8785ed281f6a031cd8ee36be6c071"} Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.429420 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.545945 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content\") pod \"84767a0c-98eb-4e0d-8882-d22d3f00baea\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.546028 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj7ls\" (UniqueName: \"kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls\") pod \"84767a0c-98eb-4e0d-8882-d22d3f00baea\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.546094 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities\") pod \"84767a0c-98eb-4e0d-8882-d22d3f00baea\" (UID: \"84767a0c-98eb-4e0d-8882-d22d3f00baea\") " Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.547336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities" (OuterVolumeSpecName: "utilities") pod "84767a0c-98eb-4e0d-8882-d22d3f00baea" (UID: "84767a0c-98eb-4e0d-8882-d22d3f00baea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.570134 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls" (OuterVolumeSpecName: "kube-api-access-kj7ls") pod "84767a0c-98eb-4e0d-8882-d22d3f00baea" (UID: "84767a0c-98eb-4e0d-8882-d22d3f00baea"). InnerVolumeSpecName "kube-api-access-kj7ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.616129 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84767a0c-98eb-4e0d-8882-d22d3f00baea" (UID: "84767a0c-98eb-4e0d-8882-d22d3f00baea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.638965 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4m4k7" event={"ID":"84767a0c-98eb-4e0d-8882-d22d3f00baea","Type":"ContainerDied","Data":"1e1fbedc9e1a34943e0ac732d2597d11a282b0fb2a99f28c42d65d6d693eec82"} Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.639071 4746 scope.go:117] "RemoveContainer" containerID="5fa5e5f6fc0b4ee1a170be6a15e2240337e8785ed281f6a031cd8ee36be6c071" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.639024 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4m4k7" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.647559 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.647615 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj7ls\" (UniqueName: \"kubernetes.io/projected/84767a0c-98eb-4e0d-8882-d22d3f00baea-kube-api-access-kj7ls\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.647627 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84767a0c-98eb-4e0d-8882-d22d3f00baea-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.673569 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:35 crc kubenswrapper[4746]: I0103 03:31:35.679647 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4m4k7"] Jan 03 03:31:36 crc kubenswrapper[4746]: I0103 03:31:36.361743 4746 scope.go:117] "RemoveContainer" containerID="ce7523f276e76a3e38aa6536638bdf98a593c997c145a6f3a5fbc7e32ffc052c" Jan 03 03:31:36 crc kubenswrapper[4746]: I0103 03:31:36.486777 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" path="/var/lib/kubelet/pods/84767a0c-98eb-4e0d-8882-d22d3f00baea/volumes" Jan 03 03:31:36 crc kubenswrapper[4746]: I0103 03:31:36.936897 4746 scope.go:117] "RemoveContainer" containerID="ead469f04d4f9ad8819ffe706319d4dc1536ade951cf3288fea5c61cc81e7d1e" Jan 03 03:31:38 crc kubenswrapper[4746]: I0103 03:31:38.661434 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerStarted","Data":"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e"} Jan 03 03:31:49 crc kubenswrapper[4746]: I0103 03:31:49.730805 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nxg77" event={"ID":"b0b84bd0-171e-4129-b0d4-42a68cd8075b","Type":"ContainerStarted","Data":"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140"} Jan 03 03:31:49 crc kubenswrapper[4746]: I0103 03:31:49.750874 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-nxg77" podStartSLOduration=1.676203232 podStartE2EDuration="18.750856658s" podCreationTimestamp="2026-01-03 03:31:31 +0000 UTC" firstStartedPulling="2026-01-03 03:31:32.422497339 +0000 UTC m=+1012.272387654" lastFinishedPulling="2026-01-03 03:31:49.497150775 +0000 UTC m=+1029.347041080" observedRunningTime="2026-01-03 03:31:49.744533863 +0000 UTC m=+1029.594424168" watchObservedRunningTime="2026-01-03 03:31:49.750856658 +0000 UTC m=+1029.600746963" Jan 03 03:31:51 crc kubenswrapper[4746]: I0103 03:31:51.858928 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:51 crc kubenswrapper[4746]: I0103 03:31:51.859434 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:31:51 crc kubenswrapper[4746]: I0103 03:31:51.894343 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:32:01 crc kubenswrapper[4746]: I0103 03:32:01.898488 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:32:15 crc kubenswrapper[4746]: I0103 03:32:15.940354 4746 generic.go:334] "Generic (PLEG): container finished" podID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerID="fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e" exitCode=0 Jan 03 03:32:15 crc kubenswrapper[4746]: I0103 03:32:15.940459 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerDied","Data":"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e"} Jan 03 03:32:16 crc kubenswrapper[4746]: I0103 03:32:16.950742 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerStarted","Data":"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30"} Jan 03 03:32:16 crc kubenswrapper[4746]: I0103 03:32:16.951360 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:32:16 crc kubenswrapper[4746]: I0103 03:32:16.971925 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/rabbitmq-server-0" podStartSLOduration=41.640833452 podStartE2EDuration="48.971901505s" podCreationTimestamp="2026-01-03 03:31:28 +0000 UTC" firstStartedPulling="2026-01-03 03:31:29.706100681 +0000 UTC m=+1009.555990986" lastFinishedPulling="2026-01-03 03:31:37.037168734 +0000 UTC m=+1016.887059039" observedRunningTime="2026-01-03 03:32:16.967138398 +0000 UTC m=+1056.817028713" 
watchObservedRunningTime="2026-01-03 03:32:16.971901505 +0000 UTC m=+1056.821791810" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.578683 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw"] Jan 03 03:32:18 crc kubenswrapper[4746]: E0103 03:32:18.578940 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="extract-utilities" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.578952 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="extract-utilities" Jan 03 03:32:18 crc kubenswrapper[4746]: E0103 03:32:18.578983 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="registry-server" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.578988 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="registry-server" Jan 03 03:32:18 crc kubenswrapper[4746]: E0103 03:32:18.578997 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="extract-content" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.579004 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="extract-content" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.579113 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="84767a0c-98eb-4e0d-8882-d22d3f00baea" containerName="registry-server" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.579967 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.582850 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hpjh5" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.597141 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw"] Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.716328 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcb2c\" (UniqueName: \"kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.716392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.716480 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.819074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcb2c\" (UniqueName: \"kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.819575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.819649 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.820394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.820566 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.848826 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcb2c\" (UniqueName: \"kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c\") pod \"835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:18 crc kubenswrapper[4746]: I0103 03:32:18.895067 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:19 crc kubenswrapper[4746]: I0103 03:32:19.345834 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw"] Jan 03 03:32:19 crc kubenswrapper[4746]: W0103 03:32:19.349889 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94395e3d_0bbb_4c00_b181_51289c280e93.slice/crio-9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe WatchSource:0}: Error finding container 9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe: Status 404 returned error can't find the container with id 9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe Jan 03 03:32:19 crc kubenswrapper[4746]: I0103 03:32:19.979960 4746 generic.go:334] "Generic (PLEG): container finished" podID="94395e3d-0bbb-4c00-b181-51289c280e93" containerID="16634cfc4c21204c83d4490d068f78151322b54b426b1c5b465e3f92840d9ee6" exitCode=0 Jan 03 03:32:19 crc kubenswrapper[4746]: I0103 03:32:19.980006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" event={"ID":"94395e3d-0bbb-4c00-b181-51289c280e93","Type":"ContainerDied","Data":"16634cfc4c21204c83d4490d068f78151322b54b426b1c5b465e3f92840d9ee6"} Jan 03 03:32:19 crc kubenswrapper[4746]: I0103 03:32:19.980094 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" event={"ID":"94395e3d-0bbb-4c00-b181-51289c280e93","Type":"ContainerStarted","Data":"9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe"} Jan 03 03:32:19 crc kubenswrapper[4746]: I0103 03:32:19.981924 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 03:32:20 crc kubenswrapper[4746]: I0103 03:32:20.988129 4746 generic.go:334] "Generic (PLEG): container finished" podID="94395e3d-0bbb-4c00-b181-51289c280e93" 
containerID="f59319b371d31369c6beaa6b395565265a4223cc773602385bee9bbef5ea8e65" exitCode=0 Jan 03 03:32:20 crc kubenswrapper[4746]: I0103 03:32:20.988212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" event={"ID":"94395e3d-0bbb-4c00-b181-51289c280e93","Type":"ContainerDied","Data":"f59319b371d31369c6beaa6b395565265a4223cc773602385bee9bbef5ea8e65"} Jan 03 03:32:21 crc kubenswrapper[4746]: I0103 03:32:21.996640 4746 generic.go:334] "Generic (PLEG): container finished" podID="94395e3d-0bbb-4c00-b181-51289c280e93" containerID="2eded92e414b1dc1538a1e152a7be2606b8569a429298d9492ccfe79da04f679" exitCode=0 Jan 03 03:32:21 crc kubenswrapper[4746]: I0103 03:32:21.996705 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" event={"ID":"94395e3d-0bbb-4c00-b181-51289c280e93","Type":"ContainerDied","Data":"2eded92e414b1dc1538a1e152a7be2606b8569a429298d9492ccfe79da04f679"} Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.252349 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.408731 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcb2c\" (UniqueName: \"kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c\") pod \"94395e3d-0bbb-4c00-b181-51289c280e93\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.409210 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle\") pod \"94395e3d-0bbb-4c00-b181-51289c280e93\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.409533 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util\") pod \"94395e3d-0bbb-4c00-b181-51289c280e93\" (UID: \"94395e3d-0bbb-4c00-b181-51289c280e93\") " Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.410011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle" (OuterVolumeSpecName: "bundle") pod "94395e3d-0bbb-4c00-b181-51289c280e93" (UID: "94395e3d-0bbb-4c00-b181-51289c280e93"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.410825 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.414401 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c" (OuterVolumeSpecName: "kube-api-access-xcb2c") pod "94395e3d-0bbb-4c00-b181-51289c280e93" (UID: "94395e3d-0bbb-4c00-b181-51289c280e93"). InnerVolumeSpecName "kube-api-access-xcb2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.430814 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util" (OuterVolumeSpecName: "util") pod "94395e3d-0bbb-4c00-b181-51289c280e93" (UID: "94395e3d-0bbb-4c00-b181-51289c280e93"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.511837 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/94395e3d-0bbb-4c00-b181-51289c280e93-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:32:23 crc kubenswrapper[4746]: I0103 03:32:23.511880 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcb2c\" (UniqueName: \"kubernetes.io/projected/94395e3d-0bbb-4c00-b181-51289c280e93-kube-api-access-xcb2c\") on node \"crc\" DevicePath \"\"" Jan 03 03:32:24 crc kubenswrapper[4746]: I0103 03:32:24.009940 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" event={"ID":"94395e3d-0bbb-4c00-b181-51289c280e93","Type":"ContainerDied","Data":"9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe"} Jan 03 03:32:24 crc kubenswrapper[4746]: I0103 03:32:24.010205 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0016029413596d796648170700a954ce3b77f1ef064b2238c5d28b01967dbe" Jan 03 03:32:24 crc kubenswrapper[4746]: I0103 03:32:24.009986 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw" Jan 03 03:32:29 crc kubenswrapper[4746]: I0103 03:32:29.393934 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:32:31 crc kubenswrapper[4746]: I0103 03:32:31.373770 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:32:31 crc kubenswrapper[4746]: I0103 03:32:31.374126 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.745671 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:32:32 crc kubenswrapper[4746]: E0103 03:32:32.745933 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="extract" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.745943 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="extract" Jan 03 03:32:32 crc kubenswrapper[4746]: E0103 03:32:32.745954 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="pull" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 
03:32:32.745960 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="pull" Jan 03 03:32:32 crc kubenswrapper[4746]: E0103 03:32:32.745976 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="util" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.745984 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="util" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.746103 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" containerName="extract" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.746528 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.748340 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-q9fnr" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.750894 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.774755 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.846252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.846350 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twfs\" (UniqueName: \"kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.846456 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.947854 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.947920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.947974 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4twfs\" (UniqueName: \"kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.954283 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.958234 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:32 crc kubenswrapper[4746]: I0103 03:32:32.964674 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4twfs\" (UniqueName: \"kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs\") pod \"keystone-operator-controller-manager-698bd85bf4-klp5b\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:33 crc kubenswrapper[4746]: I0103 03:32:33.070892 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:33 crc kubenswrapper[4746]: I0103 03:32:33.632686 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:32:33 crc kubenswrapper[4746]: W0103 03:32:33.641163 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fc0444c_72a4_4172_ab52_4f24f214486d.slice/crio-8d54a825a5edb6a1299700f9b940b2a5a11aecefec930bebda1c1f2102f59016 WatchSource:0}: Error finding container 8d54a825a5edb6a1299700f9b940b2a5a11aecefec930bebda1c1f2102f59016: Status 404 returned error can't find the container with id 8d54a825a5edb6a1299700f9b940b2a5a11aecefec930bebda1c1f2102f59016 Jan 03 03:32:34 crc kubenswrapper[4746]: I0103 03:32:34.074915 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" event={"ID":"6fc0444c-72a4-4172-ab52-4f24f214486d","Type":"ContainerStarted","Data":"8d54a825a5edb6a1299700f9b940b2a5a11aecefec930bebda1c1f2102f59016"} Jan 03 03:32:38 crc kubenswrapper[4746]: I0103 03:32:38.107769 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" event={"ID":"6fc0444c-72a4-4172-ab52-4f24f214486d","Type":"ContainerStarted","Data":"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67"} Jan 03 03:32:38 crc kubenswrapper[4746]: I0103 03:32:38.108415 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:38 crc kubenswrapper[4746]: I0103 03:32:38.125609 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" podStartSLOduration=2.683923245 podStartE2EDuration="6.125593999s" podCreationTimestamp="2026-01-03 03:32:32 +0000 UTC" firstStartedPulling="2026-01-03 03:32:33.643379206 +0000 UTC m=+1073.493269511" lastFinishedPulling="2026-01-03 03:32:37.08504996 +0000 UTC m=+1076.934940265" observedRunningTime="2026-01-03 03:32:38.122607115 +0000 UTC m=+1077.972497420" watchObservedRunningTime="2026-01-03 03:32:38.125593999 +0000 UTC m=+1077.975484304" Jan 03 03:32:43 crc kubenswrapper[4746]: I0103 03:32:43.075373 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.523587 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.524968 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.526880 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-24vg4" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.534458 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.697392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tslx6\" (UniqueName: \"kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6\") pod \"barbican-operator-index-4xqm5\" (UID: \"38ea823b-425b-4360-a481-a3719368104a\") " pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.799122 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tslx6\" (UniqueName: \"kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6\") pod \"barbican-operator-index-4xqm5\" (UID: \"38ea823b-425b-4360-a481-a3719368104a\") " pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.817099 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tslx6\" (UniqueName: \"kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6\") pod \"barbican-operator-index-4xqm5\" (UID: \"38ea823b-425b-4360-a481-a3719368104a\") " pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:47 crc kubenswrapper[4746]: I0103 03:32:47.850853 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:48 crc kubenswrapper[4746]: I0103 03:32:48.249604 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:48 crc kubenswrapper[4746]: W0103 03:32:48.253317 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ea823b_425b_4360_a481_a3719368104a.slice/crio-a4b2f6d16717d31583d8fca856b2f5fdf35df3e0f2615aa787d02cfdb0cc2dd9 WatchSource:0}: Error finding container a4b2f6d16717d31583d8fca856b2f5fdf35df3e0f2615aa787d02cfdb0cc2dd9: Status 404 returned error can't find the container with id a4b2f6d16717d31583d8fca856b2f5fdf35df3e0f2615aa787d02cfdb0cc2dd9 Jan 03 03:32:49 crc kubenswrapper[4746]: I0103 03:32:49.189970 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-4xqm5" event={"ID":"38ea823b-425b-4360-a481-a3719368104a","Type":"ContainerStarted","Data":"a4b2f6d16717d31583d8fca856b2f5fdf35df3e0f2615aa787d02cfdb0cc2dd9"} Jan 03 03:32:51 crc kubenswrapper[4746]: I0103 03:32:51.203096 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-4xqm5" event={"ID":"38ea823b-425b-4360-a481-a3719368104a","Type":"ContainerStarted","Data":"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6"} Jan 03 03:32:52 crc kubenswrapper[4746]: I0103 03:32:52.715894 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-4xqm5" podStartSLOduration=3.053678599 podStartE2EDuration="5.715872329s" podCreationTimestamp="2026-01-03 03:32:47 +0000 UTC" firstStartedPulling="2026-01-03 03:32:48.255676425 +0000 UTC m=+1088.105566730" lastFinishedPulling="2026-01-03 03:32:50.917870155 +0000 UTC m=+1090.767760460" observedRunningTime="2026-01-03 03:32:51.22842521 +0000 UTC m=+1091.078315505" watchObservedRunningTime="2026-01-03 03:32:52.715872329 +0000 UTC m=+1092.565762634" Jan 03 03:32:52 crc kubenswrapper[4746]: I0103 03:32:52.721295 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.217123 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-4xqm5" podUID="38ea823b-425b-4360-a481-a3719368104a" containerName="registry-server" containerID="cri-o://048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6" gracePeriod=2 Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.325324 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.326135 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.338241 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.486797 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzr8\" (UniqueName: \"kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8\") pod \"barbican-operator-index-wdcpb\" (UID: \"35d592dd-baad-44d9-9fc0-3eab11cea0b4\") " pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.588041 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzr8\" (UniqueName: \"kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8\") pod \"barbican-operator-index-wdcpb\" (UID: \"35d592dd-baad-44d9-9fc0-3eab11cea0b4\") " pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.606206 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzr8\" (UniqueName: \"kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8\") pod \"barbican-operator-index-wdcpb\" (UID: \"35d592dd-baad-44d9-9fc0-3eab11cea0b4\") " pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:32:53 crc kubenswrapper[4746]: I0103 03:32:53.697290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.117981 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.174455 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.225343 4746 generic.go:334] "Generic (PLEG): container finished" podID="38ea823b-425b-4360-a481-a3719368104a" containerID="048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6" exitCode=0 Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.225407 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-4xqm5" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.225411 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-4xqm5" event={"ID":"38ea823b-425b-4360-a481-a3719368104a","Type":"ContainerDied","Data":"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6"} Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.225522 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-4xqm5" event={"ID":"38ea823b-425b-4360-a481-a3719368104a","Type":"ContainerDied","Data":"a4b2f6d16717d31583d8fca856b2f5fdf35df3e0f2615aa787d02cfdb0cc2dd9"} Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.225541 4746 scope.go:117] "RemoveContainer" containerID="048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.227059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-wdcpb" event={"ID":"35d592dd-baad-44d9-9fc0-3eab11cea0b4","Type":"ContainerStarted","Data":"0cf5c529e6385ddafee830813a011b9a6cc4f3d8428f16bc02dc43c79d3f82b6"} Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.242249 4746 scope.go:117] "RemoveContainer" containerID="048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6" Jan 03 03:32:54 crc kubenswrapper[4746]: E0103 03:32:54.242627 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6\": container with ID starting with 048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6 not found: ID does not exist" containerID="048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.242668 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6"} err="failed to get container status \"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6\": rpc error: code = NotFound desc = could not find container \"048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6\": container with ID starting with 048c5df0eb332b733fa8c21bd522734638eaa17f41afccd559d283691bb016d6 not found: ID does not exist" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.300989 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tslx6\" (UniqueName: \"kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6\") pod \"38ea823b-425b-4360-a481-a3719368104a\" (UID: \"38ea823b-425b-4360-a481-a3719368104a\") " Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.306593 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6" (OuterVolumeSpecName: "kube-api-access-tslx6") pod "38ea823b-425b-4360-a481-a3719368104a" (UID: "38ea823b-425b-4360-a481-a3719368104a"). InnerVolumeSpecName "kube-api-access-tslx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.403509 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tslx6\" (UniqueName: \"kubernetes.io/projected/38ea823b-425b-4360-a481-a3719368104a-kube-api-access-tslx6\") on node \"crc\" DevicePath \"\"" Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.546444 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:54 crc kubenswrapper[4746]: I0103 03:32:54.551115 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-4xqm5"] Jan 03 03:32:55 crc kubenswrapper[4746]: I0103 03:32:55.236327 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-wdcpb" event={"ID":"35d592dd-baad-44d9-9fc0-3eab11cea0b4","Type":"ContainerStarted","Data":"10b33a74d0610f3bfff929006acf8d039fd15cd2d04047cc256b69d7447b0975"} Jan 03 03:32:55 crc kubenswrapper[4746]: I0103 03:32:55.251394 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-wdcpb" podStartSLOduration=2.205516359 podStartE2EDuration="2.251378101s" podCreationTimestamp="2026-01-03 03:32:53 +0000 UTC" firstStartedPulling="2026-01-03 03:32:54.131637305 +0000 UTC m=+1093.981527610" lastFinishedPulling="2026-01-03 03:32:54.177499047 +0000 UTC m=+1094.027389352" observedRunningTime="2026-01-03 03:32:55.247708052 +0000 UTC m=+1095.097598367" watchObservedRunningTime="2026-01-03 03:32:55.251378101 +0000 UTC m=+1095.101268406" Jan 03 03:32:56 crc kubenswrapper[4746]: I0103 03:32:56.472214 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ea823b-425b-4360-a481-a3719368104a" path="/var/lib/kubelet/pods/38ea823b-425b-4360-a481-a3719368104a/volumes" Jan 03 03:33:01 crc kubenswrapper[4746]: I0103 03:33:01.373422 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:33:01 crc kubenswrapper[4746]: I0103 03:33:01.374006 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:33:03 crc kubenswrapper[4746]: I0103 03:33:03.697868 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:33:03 crc kubenswrapper[4746]: I0103 03:33:03.698211 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:33:03 crc kubenswrapper[4746]: I0103 03:33:03.730866 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:33:04 crc kubenswrapper[4746]: I0103 03:33:04.316833 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.173762 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["barbican-kuttl-tests/keystone-db-create-shs5n"] Jan 03 03:33:05 crc kubenswrapper[4746]: E0103 03:33:05.174035 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ea823b-425b-4360-a481-a3719368104a" containerName="registry-server" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.174047 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ea823b-425b-4360-a481-a3719368104a" containerName="registry-server" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.174182 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ea823b-425b-4360-a481-a3719368104a" containerName="registry-server" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.174986 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.179600 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz"] Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.181002 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.183179 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-db-secret" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.185836 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz"] Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.197270 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-db-create-shs5n"] Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.278681 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.278762 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lglg8\" (UniqueName: \"kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.278811 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxk9f\" (UniqueName: \"kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.278949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc 
kubenswrapper[4746]: I0103 03:33:05.380639 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.380729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.380757 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lglg8\" (UniqueName: \"kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.380788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxk9f\" (UniqueName: \"kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.381774 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.382268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.400440 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lglg8\" (UniqueName: \"kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8\") pod \"keystone-70a3-account-create-update-49wmz\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.400791 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxk9f\" (UniqueName: \"kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f\") pod \"keystone-db-create-shs5n\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.497685 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.515482 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.938050 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz"] Jan 03 03:33:05 crc kubenswrapper[4746]: I0103 03:33:05.945133 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-db-create-shs5n"] Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.307142 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" event={"ID":"3603205b-18b7-4254-860a-949ffb13bda2","Type":"ContainerStarted","Data":"540892563bf0d073a46c7adcbd1e9b5bf1022cc1f634ed9f833c96e9328aaf5c"} Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.307902 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-create-shs5n" event={"ID":"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1","Type":"ContainerStarted","Data":"1ae71dbae86732308caa3d2f76789f5c4e905d58e220401c487043093b1ac699"} Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.558503 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk"] Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.560339 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.562784 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hpjh5" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.570880 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk"] Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.698100 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.698518 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.698553 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgb27\" (UniqueName: \"kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " 
pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.800441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.800510 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgb27\" (UniqueName: \"kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.800576 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.801058 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.801073 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.829257 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgb27\" (UniqueName: \"kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27\") pod \"227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:06 crc kubenswrapper[4746]: I0103 03:33:06.884315 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.302794 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk"] Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.314151 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" event={"ID":"a980463d-8e17-4fca-bdee-d83282ad9d37","Type":"ContainerStarted","Data":"cdab7c6a59c059f7dd60bb1d7cd5d53778fabdc187ff12f7a53cea642625a050"} Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.315172 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" event={"ID":"3603205b-18b7-4254-860a-949ffb13bda2","Type":"ContainerStarted","Data":"7e7e96e28b54d657b50a8b11a3917ff0f1ecf4d9a9d20a95e46b69d17ce69a25"} Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.317579 4746 generic.go:334] "Generic (PLEG): container finished" podID="f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" containerID="6f914e617ac67949fde0fdc4b754492a875edb036f78bb63cdda646cdd3543b2" exitCode=0 Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.317615 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-create-shs5n" event={"ID":"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1","Type":"ContainerDied","Data":"6f914e617ac67949fde0fdc4b754492a875edb036f78bb63cdda646cdd3543b2"} Jan 03 03:33:07 crc kubenswrapper[4746]: I0103 03:33:07.352584 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" podStartSLOduration=2.352565801 podStartE2EDuration="2.352565801s" podCreationTimestamp="2026-01-03 03:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:33:07.351816412 +0000 UTC m=+1107.201706727" watchObservedRunningTime="2026-01-03 03:33:07.352565801 +0000 UTC m=+1107.202456106" Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.328064 4746 generic.go:334] "Generic (PLEG): container finished" podID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerID="4f0f540e64bdcddc60f6423ace6903c848e5e902a1251dec4a2d41d368b988a0" exitCode=0 Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.328432 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" event={"ID":"a980463d-8e17-4fca-bdee-d83282ad9d37","Type":"ContainerDied","Data":"4f0f540e64bdcddc60f6423ace6903c848e5e902a1251dec4a2d41d368b988a0"} Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.330172 4746 generic.go:334] "Generic (PLEG): container finished" podID="3603205b-18b7-4254-860a-949ffb13bda2" containerID="7e7e96e28b54d657b50a8b11a3917ff0f1ecf4d9a9d20a95e46b69d17ce69a25" exitCode=0 Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.330487 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" event={"ID":"3603205b-18b7-4254-860a-949ffb13bda2","Type":"ContainerDied","Data":"7e7e96e28b54d657b50a8b11a3917ff0f1ecf4d9a9d20a95e46b69d17ce69a25"} Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.693002 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.855528 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxk9f\" (UniqueName: \"kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f\") pod \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.855738 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts\") pod \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\" (UID: \"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1\") " Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.856687 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" (UID: "f99f5dd0-17b8-45f8-97f7-6571b6c35ce1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.871361 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f" (OuterVolumeSpecName: "kube-api-access-zxk9f") pod "f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" (UID: "f99f5dd0-17b8-45f8-97f7-6571b6c35ce1"). InnerVolumeSpecName "kube-api-access-zxk9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.957456 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxk9f\" (UniqueName: \"kubernetes.io/projected/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-kube-api-access-zxk9f\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:08 crc kubenswrapper[4746]: I0103 03:33:08.957489 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.339272 4746 generic.go:334] "Generic (PLEG): container finished" podID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerID="099ca9c676227b3ce1a7cdbfe483f97f503619a4559deb068507232f34210985" exitCode=0 Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.339349 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" event={"ID":"a980463d-8e17-4fca-bdee-d83282ad9d37","Type":"ContainerDied","Data":"099ca9c676227b3ce1a7cdbfe483f97f503619a4559deb068507232f34210985"} Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.341722 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-db-create-shs5n" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.342095 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-create-shs5n" event={"ID":"f99f5dd0-17b8-45f8-97f7-6571b6c35ce1","Type":"ContainerDied","Data":"1ae71dbae86732308caa3d2f76789f5c4e905d58e220401c487043093b1ac699"} Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.342834 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ae71dbae86732308caa3d2f76789f5c4e905d58e220401c487043093b1ac699" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.607252 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.769818 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lglg8\" (UniqueName: \"kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8\") pod \"3603205b-18b7-4254-860a-949ffb13bda2\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.770064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts\") pod \"3603205b-18b7-4254-860a-949ffb13bda2\" (UID: \"3603205b-18b7-4254-860a-949ffb13bda2\") " Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.770625 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3603205b-18b7-4254-860a-949ffb13bda2" (UID: "3603205b-18b7-4254-860a-949ffb13bda2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.775278 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8" (OuterVolumeSpecName: "kube-api-access-lglg8") pod "3603205b-18b7-4254-860a-949ffb13bda2" (UID: "3603205b-18b7-4254-860a-949ffb13bda2"). InnerVolumeSpecName "kube-api-access-lglg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.871783 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3603205b-18b7-4254-860a-949ffb13bda2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:09 crc kubenswrapper[4746]: I0103 03:33:09.871840 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lglg8\" (UniqueName: \"kubernetes.io/projected/3603205b-18b7-4254-860a-949ffb13bda2-kube-api-access-lglg8\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:10 crc kubenswrapper[4746]: I0103 03:33:10.357208 4746 generic.go:334] "Generic (PLEG): container finished" podID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerID="bc893040ff857e35968ef8ef2bd18b2eb36280b83f5035a213cbe8ee62e16e21" exitCode=0 Jan 03 03:33:10 crc kubenswrapper[4746]: I0103 03:33:10.357566 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" event={"ID":"a980463d-8e17-4fca-bdee-d83282ad9d37","Type":"ContainerDied","Data":"bc893040ff857e35968ef8ef2bd18b2eb36280b83f5035a213cbe8ee62e16e21"} Jan 03 03:33:10 crc kubenswrapper[4746]: I0103 03:33:10.360371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" event={"ID":"3603205b-18b7-4254-860a-949ffb13bda2","Type":"ContainerDied","Data":"540892563bf0d073a46c7adcbd1e9b5bf1022cc1f634ed9f833c96e9328aaf5c"} Jan 03 03:33:10 crc kubenswrapper[4746]: I0103 03:33:10.360398 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540892563bf0d073a46c7adcbd1e9b5bf1022cc1f634ed9f833c96e9328aaf5c" Jan 03 03:33:10 crc kubenswrapper[4746]: I0103 03:33:10.360440 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.661156 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.799861 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle\") pod \"a980463d-8e17-4fca-bdee-d83282ad9d37\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.799923 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgb27\" (UniqueName: \"kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27\") pod \"a980463d-8e17-4fca-bdee-d83282ad9d37\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.800076 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util\") pod \"a980463d-8e17-4fca-bdee-d83282ad9d37\" (UID: \"a980463d-8e17-4fca-bdee-d83282ad9d37\") " Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.801485 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle" (OuterVolumeSpecName: "bundle") pod "a980463d-8e17-4fca-bdee-d83282ad9d37" (UID: "a980463d-8e17-4fca-bdee-d83282ad9d37"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.804023 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27" (OuterVolumeSpecName: "kube-api-access-pgb27") pod "a980463d-8e17-4fca-bdee-d83282ad9d37" (UID: "a980463d-8e17-4fca-bdee-d83282ad9d37"). InnerVolumeSpecName "kube-api-access-pgb27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.813172 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util" (OuterVolumeSpecName: "util") pod "a980463d-8e17-4fca-bdee-d83282ad9d37" (UID: "a980463d-8e17-4fca-bdee-d83282ad9d37"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.902617 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.902683 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgb27\" (UniqueName: \"kubernetes.io/projected/a980463d-8e17-4fca-bdee-d83282ad9d37-kube-api-access-pgb27\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:11 crc kubenswrapper[4746]: I0103 03:33:11.902698 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a980463d-8e17-4fca-bdee-d83282ad9d37-util\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:12 crc kubenswrapper[4746]: I0103 03:33:12.374143 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" event={"ID":"a980463d-8e17-4fca-bdee-d83282ad9d37","Type":"ContainerDied","Data":"cdab7c6a59c059f7dd60bb1d7cd5d53778fabdc187ff12f7a53cea642625a050"} Jan 03 03:33:12 crc kubenswrapper[4746]: I0103 03:33:12.374462 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdab7c6a59c059f7dd60bb1d7cd5d53778fabdc187ff12f7a53cea642625a050" Jan 03 03:33:12 crc kubenswrapper[4746]: I0103 03:33:12.374253 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847001 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/keystone-db-sync-cnt9k"] Jan 03 03:33:15 crc kubenswrapper[4746]: E0103 03:33:15.847472 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" containerName="mariadb-database-create" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847483 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" containerName="mariadb-database-create" Jan 03 03:33:15 crc kubenswrapper[4746]: E0103 03:33:15.847494 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="pull" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847500 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="pull" Jan 03 03:33:15 crc kubenswrapper[4746]: E0103 03:33:15.847515 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="util" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847521 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="util" Jan 03 03:33:15 crc kubenswrapper[4746]: E0103 03:33:15.847535 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3603205b-18b7-4254-860a-949ffb13bda2" containerName="mariadb-account-create-update" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847542 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3603205b-18b7-4254-860a-949ffb13bda2" containerName="mariadb-account-create-update" Jan 03 03:33:15 crc kubenswrapper[4746]: E0103 03:33:15.847548 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="extract" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847554 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="extract" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847687 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" containerName="mariadb-database-create" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847700 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3603205b-18b7-4254-860a-949ffb13bda2" containerName="mariadb-account-create-update" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.847708 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" containerName="extract" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.848133 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.849943 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.849943 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-keystone-dockercfg-s75pt" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.850318 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-scripts" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.850708 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-config-data" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.862917 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-db-sync-cnt9k"] Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.967878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hkj\" (UniqueName: \"kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:15 crc kubenswrapper[4746]: I0103 03:33:15.967960 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 crc kubenswrapper[4746]: I0103 03:33:16.069787 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9hkj\" (UniqueName: \"kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 crc kubenswrapper[4746]: I0103 03:33:16.069890 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 
crc kubenswrapper[4746]: I0103 03:33:16.076861 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 crc kubenswrapper[4746]: I0103 03:33:16.090099 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hkj\" (UniqueName: \"kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj\") pod \"keystone-db-sync-cnt9k\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 crc kubenswrapper[4746]: I0103 03:33:16.165005 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:16 crc kubenswrapper[4746]: I0103 03:33:16.586448 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-db-sync-cnt9k"] Jan 03 03:33:17 crc kubenswrapper[4746]: I0103 03:33:17.412103 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" event={"ID":"c8c34c1d-632f-406a-a5ae-1ce804ef6f66","Type":"ContainerStarted","Data":"cc691705568c29085fc6f1687ddbc4a3afb02c1ea2b7d731f0f51ad32f2ff4cb"} Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.049374 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.051712 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.055051 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-t69ks" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.056059 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.068538 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.229503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.229573 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.229609 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l5g95\" (UniqueName: \"kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.330683 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.330748 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.330776 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5g95\" (UniqueName: \"kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.339067 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.352749 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.358109 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5g95\" (UniqueName: \"kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95\") pod \"barbican-operator-controller-manager-5fc9c6ccf4-xgzjx\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:24 crc kubenswrapper[4746]: I0103 03:33:24.369875 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:25 crc kubenswrapper[4746]: I0103 03:33:25.469138 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" event={"ID":"c8c34c1d-632f-406a-a5ae-1ce804ef6f66","Type":"ContainerStarted","Data":"b0d8afb23629647a4d2c1532dc37a6347eaca4e7cd09a6089edcb40aed085976"} Jan 03 03:33:25 crc kubenswrapper[4746]: I0103 03:33:25.492584 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" podStartSLOduration=1.918878566 podStartE2EDuration="10.49256036s" podCreationTimestamp="2026-01-03 03:33:15 +0000 UTC" firstStartedPulling="2026-01-03 03:33:16.593066726 +0000 UTC m=+1116.442957031" lastFinishedPulling="2026-01-03 03:33:25.16674852 +0000 UTC m=+1125.016638825" observedRunningTime="2026-01-03 03:33:25.482882783 +0000 UTC m=+1125.332773088" watchObservedRunningTime="2026-01-03 03:33:25.49256036 +0000 UTC m=+1125.342450665" Jan 03 03:33:25 crc kubenswrapper[4746]: I0103 03:33:25.513125 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:33:26 crc kubenswrapper[4746]: I0103 03:33:26.479938 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" event={"ID":"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87","Type":"ContainerStarted","Data":"1b8965c9a0c1a3b071c4a0b98c61e6281fe9a994b669cc785ee4c0f4eb8d7e3b"} Jan 03 03:33:27 crc kubenswrapper[4746]: I0103 03:33:27.490513 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" event={"ID":"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87","Type":"ContainerStarted","Data":"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5"} Jan 03 03:33:27 crc kubenswrapper[4746]: I0103 03:33:27.490943 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:27 crc kubenswrapper[4746]: I0103 03:33:27.510851 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" podStartSLOduration=1.728556912 podStartE2EDuration="3.510828497s" podCreationTimestamp="2026-01-03 03:33:24 +0000 UTC" firstStartedPulling="2026-01-03 03:33:25.522527423 +0000 UTC m=+1125.372417768" lastFinishedPulling="2026-01-03 03:33:27.304799048 +0000 UTC m=+1127.154689353" observedRunningTime="2026-01-03 03:33:27.509979057 +0000 UTC m=+1127.359869362" watchObservedRunningTime="2026-01-03 03:33:27.510828497 +0000 UTC m=+1127.360718802" Jan 03 03:33:29 crc kubenswrapper[4746]: I0103 03:33:29.515385 4746 generic.go:334] "Generic (PLEG): container finished" podID="c8c34c1d-632f-406a-a5ae-1ce804ef6f66" containerID="b0d8afb23629647a4d2c1532dc37a6347eaca4e7cd09a6089edcb40aed085976" exitCode=0 Jan 03 03:33:29 crc kubenswrapper[4746]: I0103 03:33:29.515556 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" event={"ID":"c8c34c1d-632f-406a-a5ae-1ce804ef6f66","Type":"ContainerDied","Data":"b0d8afb23629647a4d2c1532dc37a6347eaca4e7cd09a6089edcb40aed085976"} Jan 03 03:33:30 crc kubenswrapper[4746]: I0103 03:33:30.906667 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.040931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9hkj\" (UniqueName: \"kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj\") pod \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.040983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data\") pod \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\" (UID: \"c8c34c1d-632f-406a-a5ae-1ce804ef6f66\") " Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.046375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj" (OuterVolumeSpecName: "kube-api-access-j9hkj") pod "c8c34c1d-632f-406a-a5ae-1ce804ef6f66" (UID: "c8c34c1d-632f-406a-a5ae-1ce804ef6f66"). InnerVolumeSpecName "kube-api-access-j9hkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.072093 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data" (OuterVolumeSpecName: "config-data") pod "c8c34c1d-632f-406a-a5ae-1ce804ef6f66" (UID: "c8c34c1d-632f-406a-a5ae-1ce804ef6f66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.142641 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9hkj\" (UniqueName: \"kubernetes.io/projected/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-kube-api-access-j9hkj\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.142695 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c34c1d-632f-406a-a5ae-1ce804ef6f66-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.373046 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.373356 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.373477 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.374295 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.374471 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e" gracePeriod=600 Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.542894 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e" exitCode=0 Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.543000 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e"} Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.543045 4746 scope.go:117] "RemoveContainer" containerID="bf02736da0e4a31633cefadb1cc120b93c49d7b864f32b5d90a19ffe5e5a589f" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.545752 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" event={"ID":"c8c34c1d-632f-406a-a5ae-1ce804ef6f66","Type":"ContainerDied","Data":"cc691705568c29085fc6f1687ddbc4a3afb02c1ea2b7d731f0f51ad32f2ff4cb"} Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.545795 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-db-sync-cnt9k" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.545803 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc691705568c29085fc6f1687ddbc4a3afb02c1ea2b7d731f0f51ad32f2ff4cb" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.760287 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/keystone-bootstrap-brhg2"] Jan 03 03:33:31 crc kubenswrapper[4746]: E0103 03:33:31.760905 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c34c1d-632f-406a-a5ae-1ce804ef6f66" containerName="keystone-db-sync" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.760921 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c34c1d-632f-406a-a5ae-1ce804ef6f66" containerName="keystone-db-sync" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.761060 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c34c1d-632f-406a-a5ae-1ce804ef6f66" containerName="keystone-db-sync" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.761541 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.764675 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-keystone-dockercfg-s75pt" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.764713 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-scripts" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.764801 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-config-data" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.764819 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.764902 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"osp-secret" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.782612 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-bootstrap-brhg2"] Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.851566 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.851639 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.851920 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.852034 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbbp\" (UniqueName: \"kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.852112 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.953455 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 
crc kubenswrapper[4746]: I0103 03:33:31.953549 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.953594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.953641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.953687 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbbp\" (UniqueName: \"kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.961000 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.961281 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.963603 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.968503 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:31 crc kubenswrapper[4746]: I0103 03:33:31.972248 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbbp\" (UniqueName: \"kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp\") pod \"keystone-bootstrap-brhg2\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:32 crc kubenswrapper[4746]: I0103 03:33:32.086866 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:32 crc kubenswrapper[4746]: I0103 03:33:32.487452 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-bootstrap-brhg2"] Jan 03 03:33:32 crc kubenswrapper[4746]: W0103 03:33:32.500929 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod682013af_1eb1_44f3_b23a_b63a262a94ba.slice/crio-516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3 WatchSource:0}: Error finding container 516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3: Status 404 returned error can't find the container with id 516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3 Jan 03 03:33:32 crc kubenswrapper[4746]: I0103 03:33:32.556764 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee"} Jan 03 03:33:32 crc kubenswrapper[4746]: I0103 03:33:32.558153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" event={"ID":"682013af-1eb1-44f3-b23a-b63a262a94ba","Type":"ContainerStarted","Data":"516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3"} Jan 03 03:33:33 crc kubenswrapper[4746]: I0103 03:33:33.566343 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" event={"ID":"682013af-1eb1-44f3-b23a-b63a262a94ba","Type":"ContainerStarted","Data":"0dc46200456d02e0528d713ed47f068152d526f711f7bc606b2054e92fbf792d"} Jan 03 03:33:33 crc kubenswrapper[4746]: I0103 03:33:33.586535 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" podStartSLOduration=2.58652032 podStartE2EDuration="2.58652032s" podCreationTimestamp="2026-01-03 03:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:33:33.582900262 +0000 UTC m=+1133.432790587" watchObservedRunningTime="2026-01-03 03:33:33.58652032 +0000 UTC m=+1133.436410615" Jan 03 03:33:34 crc kubenswrapper[4746]: I0103 03:33:34.375058 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:33:37 crc kubenswrapper[4746]: E0103 03:33:37.046945 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod682013af_1eb1_44f3_b23a_b63a262a94ba.slice/crio-conmon-0dc46200456d02e0528d713ed47f068152d526f711f7bc606b2054e92fbf792d.scope\": RecentStats: unable to find data in memory cache]" Jan 03 03:33:37 crc kubenswrapper[4746]: I0103 03:33:37.599245 4746 generic.go:334] "Generic (PLEG): container finished" podID="682013af-1eb1-44f3-b23a-b63a262a94ba" containerID="0dc46200456d02e0528d713ed47f068152d526f711f7bc606b2054e92fbf792d" exitCode=0 Jan 03 03:33:37 crc kubenswrapper[4746]: I0103 03:33:37.599284 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" 
event={"ID":"682013af-1eb1-44f3-b23a-b63a262a94ba","Type":"ContainerDied","Data":"0dc46200456d02e0528d713ed47f068152d526f711f7bc606b2054e92fbf792d"} Jan 03 03:33:38 crc kubenswrapper[4746]: I0103 03:33:38.946196 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.058573 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data\") pod \"682013af-1eb1-44f3-b23a-b63a262a94ba\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.058681 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbbp\" (UniqueName: \"kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp\") pod \"682013af-1eb1-44f3-b23a-b63a262a94ba\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.058720 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys\") pod \"682013af-1eb1-44f3-b23a-b63a262a94ba\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.058810 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts\") pod \"682013af-1eb1-44f3-b23a-b63a262a94ba\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.058836 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys\") pod \"682013af-1eb1-44f3-b23a-b63a262a94ba\" (UID: \"682013af-1eb1-44f3-b23a-b63a262a94ba\") " Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.064167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "682013af-1eb1-44f3-b23a-b63a262a94ba" (UID: "682013af-1eb1-44f3-b23a-b63a262a94ba"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.064242 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts" (OuterVolumeSpecName: "scripts") pod "682013af-1eb1-44f3-b23a-b63a262a94ba" (UID: "682013af-1eb1-44f3-b23a-b63a262a94ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.064435 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp" (OuterVolumeSpecName: "kube-api-access-nwbbp") pod "682013af-1eb1-44f3-b23a-b63a262a94ba" (UID: "682013af-1eb1-44f3-b23a-b63a262a94ba"). InnerVolumeSpecName "kube-api-access-nwbbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.067909 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "682013af-1eb1-44f3-b23a-b63a262a94ba" (UID: "682013af-1eb1-44f3-b23a-b63a262a94ba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.077069 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data" (OuterVolumeSpecName: "config-data") pod "682013af-1eb1-44f3-b23a-b63a262a94ba" (UID: "682013af-1eb1-44f3-b23a-b63a262a94ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.160103 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.160136 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbbp\" (UniqueName: \"kubernetes.io/projected/682013af-1eb1-44f3-b23a-b63a262a94ba-kube-api-access-nwbbp\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.160149 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.160158 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.160166 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/682013af-1eb1-44f3-b23a-b63a262a94ba-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.615884 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" event={"ID":"682013af-1eb1-44f3-b23a-b63a262a94ba","Type":"ContainerDied","Data":"516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3"} Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.615954 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516c4a775143e78fcd809760263e20715ad47a243372171f50a08049419c7bb3" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.615957 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-bootstrap-brhg2" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.697339 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:33:39 crc kubenswrapper[4746]: E0103 03:33:39.697685 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="682013af-1eb1-44f3-b23a-b63a262a94ba" containerName="keystone-bootstrap" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.697703 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="682013af-1eb1-44f3-b23a-b63a262a94ba" containerName="keystone-bootstrap" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.697835 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="682013af-1eb1-44f3-b23a-b63a262a94ba" containerName="keystone-bootstrap" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.698404 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.700738 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-scripts" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.700985 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-config-data" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.703282 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.703348 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"keystone-keystone-dockercfg-s75pt" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.705089 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.868619 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt7tv\" (UniqueName: \"kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.868722 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.868922 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.868960 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " 
pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.869023 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.970587 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.971289 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.971376 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.971485 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.971580 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt7tv\" (UniqueName: \"kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.975119 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.975255 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.977221 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 
03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.979072 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:39 crc kubenswrapper[4746]: I0103 03:33:39.988165 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt7tv\" (UniqueName: \"kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv\") pod \"keystone-5794645689-k95gh\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.026745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.236683 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-db-create-jhb4t"] Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.237598 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.242421 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-jhb4t"] Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.337511 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-418b-account-create-update-xljkl"] Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.338298 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.343209 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-db-secret" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.350315 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-418b-account-create-update-xljkl"] Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.377957 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jw7c\" (UniqueName: \"kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.378043 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.479688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jw7c\" (UniqueName: \"kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.479964 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.480092 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgns\" (UniqueName: \"kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.480203 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.480982 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.501791 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4jw7c\" (UniqueName: \"kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c\") pod \"barbican-db-create-jhb4t\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.511083 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:33:40 crc kubenswrapper[4746]: W0103 03:33:40.514132 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f024828_5253_4134_bd22_720212206aa3.slice/crio-1093bb8cc5303bbd7fcfbbc1e4899cd63f70770833bd7e10fbfaafa1dcfa12a8 WatchSource:0}: Error finding container 1093bb8cc5303bbd7fcfbbc1e4899cd63f70770833bd7e10fbfaafa1dcfa12a8: Status 404 returned error can't find the container with id 1093bb8cc5303bbd7fcfbbc1e4899cd63f70770833bd7e10fbfaafa1dcfa12a8 Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.558279 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.583102 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kgns\" (UniqueName: \"kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.583283 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.584140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.599367 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kgns\" (UniqueName: \"kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns\") pod \"barbican-418b-account-create-update-xljkl\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.624496 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" event={"ID":"0f024828-5253-4134-bd22-720212206aa3","Type":"ContainerStarted","Data":"1093bb8cc5303bbd7fcfbbc1e4899cd63f70770833bd7e10fbfaafa1dcfa12a8"} Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.677407 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:40 crc kubenswrapper[4746]: I0103 03:33:40.996929 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-jhb4t"] Jan 03 03:33:41 crc kubenswrapper[4746]: W0103 03:33:41.006714 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad1b5c3_7f2a_4012_8f30_eec86173cce1.slice/crio-1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19 WatchSource:0}: Error finding container 1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19: Status 404 returned error can't find the container with id 1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19 Jan 03 03:33:41 crc kubenswrapper[4746]: I0103 03:33:41.138500 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-418b-account-create-update-xljkl"] Jan 03 03:33:41 crc kubenswrapper[4746]: W0103 03:33:41.179718 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc637f0c_5764_44a6_8a51_52e17b52380d.slice/crio-62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd WatchSource:0}: Error finding container 62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd: Status 404 returned error can't find the container with id 62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd Jan 03 03:33:41 crc kubenswrapper[4746]: I0103 03:33:41.193117 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-db-secret" Jan 03 03:33:41 crc kubenswrapper[4746]: I0103 03:33:41.636929 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" event={"ID":"bad1b5c3-7f2a-4012-8f30-eec86173cce1","Type":"ContainerStarted","Data":"1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19"} Jan 03 03:33:41 crc kubenswrapper[4746]: I0103 03:33:41.638035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" event={"ID":"fc637f0c-5764-44a6-8a51-52e17b52380d","Type":"ContainerStarted","Data":"62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd"} Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.647579 4746 generic.go:334] "Generic (PLEG): container finished" podID="bad1b5c3-7f2a-4012-8f30-eec86173cce1" containerID="42b91fe797247e406f429c6dd3c833224c21730cb6b19203c56d7e1b179e75cf" exitCode=0 Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.648038 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" event={"ID":"bad1b5c3-7f2a-4012-8f30-eec86173cce1","Type":"ContainerDied","Data":"42b91fe797247e406f429c6dd3c833224c21730cb6b19203c56d7e1b179e75cf"} Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.650749 4746 generic.go:334] "Generic (PLEG): container finished" podID="fc637f0c-5764-44a6-8a51-52e17b52380d" containerID="9a849bbf850ecff68c4860a7c9e91355b95525fc93a31abea057f68450b8e567" exitCode=0 Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.650860 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" event={"ID":"fc637f0c-5764-44a6-8a51-52e17b52380d","Type":"ContainerDied","Data":"9a849bbf850ecff68c4860a7c9e91355b95525fc93a31abea057f68450b8e567"} Jan 03 03:33:42 
crc kubenswrapper[4746]: I0103 03:33:42.656272 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" event={"ID":"0f024828-5253-4134-bd22-720212206aa3","Type":"ContainerStarted","Data":"3203a1e3735d8352f746e109713c8ef7f8f9177307cb32b3d33fe0243ca882a8"} Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.657007 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:33:42 crc kubenswrapper[4746]: I0103 03:33:42.679581 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" podStartSLOduration=3.6795608079999997 podStartE2EDuration="3.679560808s" podCreationTimestamp="2026-01-03 03:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:33:42.679028065 +0000 UTC m=+1142.528918380" watchObservedRunningTime="2026-01-03 03:33:42.679560808 +0000 UTC m=+1142.529451113" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.047224 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.050846 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.139821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kgns\" (UniqueName: \"kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns\") pod \"fc637f0c-5764-44a6-8a51-52e17b52380d\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.140145 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts\") pod \"fc637f0c-5764-44a6-8a51-52e17b52380d\" (UID: \"fc637f0c-5764-44a6-8a51-52e17b52380d\") " Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.140606 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc637f0c-5764-44a6-8a51-52e17b52380d" (UID: "fc637f0c-5764-44a6-8a51-52e17b52380d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.146526 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns" (OuterVolumeSpecName: "kube-api-access-2kgns") pod "fc637f0c-5764-44a6-8a51-52e17b52380d" (UID: "fc637f0c-5764-44a6-8a51-52e17b52380d"). InnerVolumeSpecName "kube-api-access-2kgns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.242080 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jw7c\" (UniqueName: \"kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c\") pod \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.242234 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts\") pod \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\" (UID: \"bad1b5c3-7f2a-4012-8f30-eec86173cce1\") " Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.242598 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kgns\" (UniqueName: \"kubernetes.io/projected/fc637f0c-5764-44a6-8a51-52e17b52380d-kube-api-access-2kgns\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.242621 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc637f0c-5764-44a6-8a51-52e17b52380d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.242898 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bad1b5c3-7f2a-4012-8f30-eec86173cce1" (UID: "bad1b5c3-7f2a-4012-8f30-eec86173cce1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.244623 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c" (OuterVolumeSpecName: "kube-api-access-4jw7c") pod "bad1b5c3-7f2a-4012-8f30-eec86173cce1" (UID: "bad1b5c3-7f2a-4012-8f30-eec86173cce1"). InnerVolumeSpecName "kube-api-access-4jw7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.344061 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jw7c\" (UniqueName: \"kubernetes.io/projected/bad1b5c3-7f2a-4012-8f30-eec86173cce1-kube-api-access-4jw7c\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.344088 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad1b5c3-7f2a-4012-8f30-eec86173cce1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.669797 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" event={"ID":"bad1b5c3-7f2a-4012-8f30-eec86173cce1","Type":"ContainerDied","Data":"1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19"} Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.669832 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-jhb4t" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.669844 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fe40ddd8e70a70ad4cbad878f9ee03e006fafe9c13405632c742f1eb0fc6d19" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.671349 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" event={"ID":"fc637f0c-5764-44a6-8a51-52e17b52380d","Type":"ContainerDied","Data":"62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd"} Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.671378 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62f6ec777d64a434dfdbacec25652ea5ec80afbbf5eb77a51fe822d54e09dffd" Jan 03 03:33:44 crc kubenswrapper[4746]: I0103 03:33:44.671423 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-418b-account-create-update-xljkl" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.627912 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-v56r7"] Jan 03 03:33:45 crc kubenswrapper[4746]: E0103 03:33:45.628184 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc637f0c-5764-44a6-8a51-52e17b52380d" containerName="mariadb-account-create-update" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.628197 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc637f0c-5764-44a6-8a51-52e17b52380d" containerName="mariadb-account-create-update" Jan 03 03:33:45 crc kubenswrapper[4746]: E0103 03:33:45.628207 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad1b5c3-7f2a-4012-8f30-eec86173cce1" containerName="mariadb-database-create" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.628213 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad1b5c3-7f2a-4012-8f30-eec86173cce1" containerName="mariadb-database-create" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.628334 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc637f0c-5764-44a6-8a51-52e17b52380d" containerName="mariadb-account-create-update" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.628347 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad1b5c3-7f2a-4012-8f30-eec86173cce1" containerName="mariadb-database-create" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.628782 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.631278 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-barbican-dockercfg-67nfm" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.635255 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-config-data" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.641135 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-v56r7"] Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.763729 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.763964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8t85\" (UniqueName: \"kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.865362 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8t85\" (UniqueName: \"kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.865428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.872489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.883195 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8t85\" (UniqueName: \"kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85\") pod \"barbican-db-sync-v56r7\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:45 crc kubenswrapper[4746]: I0103 03:33:45.997287 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:46 crc kubenswrapper[4746]: I0103 03:33:46.441756 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-v56r7"] Jan 03 03:33:46 crc kubenswrapper[4746]: I0103 03:33:46.692761 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" event={"ID":"01c9efa5-01a6-4772-b77a-37d244c2696b","Type":"ContainerStarted","Data":"2a4a1e250664490f08f0a21a59b850d02c7cd7bb5f8e3b0bc141daafe42191c6"} Jan 03 03:33:50 crc kubenswrapper[4746]: I0103 03:33:50.722474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" event={"ID":"01c9efa5-01a6-4772-b77a-37d244c2696b","Type":"ContainerStarted","Data":"dfa75fa462d2e01b13eab237fc53682ed9b3833471de7fdadbe87072a0c787ba"} Jan 03 03:33:50 crc kubenswrapper[4746]: I0103 03:33:50.743710 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" podStartSLOduration=1.772727348 podStartE2EDuration="5.743689499s" podCreationTimestamp="2026-01-03 03:33:45 +0000 UTC" firstStartedPulling="2026-01-03 03:33:46.441743822 +0000 UTC m=+1146.291634127" lastFinishedPulling="2026-01-03 03:33:50.412705943 +0000 UTC m=+1150.262596278" observedRunningTime="2026-01-03 03:33:50.738873291 +0000 UTC m=+1150.588763606" watchObservedRunningTime="2026-01-03 03:33:50.743689499 +0000 UTC m=+1150.593579804" Jan 03 03:33:53 crc kubenswrapper[4746]: I0103 03:33:53.748527 4746 generic.go:334] "Generic (PLEG): container finished" podID="01c9efa5-01a6-4772-b77a-37d244c2696b" containerID="dfa75fa462d2e01b13eab237fc53682ed9b3833471de7fdadbe87072a0c787ba" exitCode=0 Jan 03 03:33:53 crc kubenswrapper[4746]: I0103 03:33:53.748601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" event={"ID":"01c9efa5-01a6-4772-b77a-37d244c2696b","Type":"ContainerDied","Data":"dfa75fa462d2e01b13eab237fc53682ed9b3833471de7fdadbe87072a0c787ba"} Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.147618 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.223893 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data\") pod \"01c9efa5-01a6-4772-b77a-37d244c2696b\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.223992 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8t85\" (UniqueName: \"kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85\") pod \"01c9efa5-01a6-4772-b77a-37d244c2696b\" (UID: \"01c9efa5-01a6-4772-b77a-37d244c2696b\") " Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.229692 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85" (OuterVolumeSpecName: "kube-api-access-t8t85") pod "01c9efa5-01a6-4772-b77a-37d244c2696b" (UID: "01c9efa5-01a6-4772-b77a-37d244c2696b"). InnerVolumeSpecName "kube-api-access-t8t85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.231104 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "01c9efa5-01a6-4772-b77a-37d244c2696b" (UID: "01c9efa5-01a6-4772-b77a-37d244c2696b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.325678 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/01c9efa5-01a6-4772-b77a-37d244c2696b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.325718 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8t85\" (UniqueName: \"kubernetes.io/projected/01c9efa5-01a6-4772-b77a-37d244c2696b-kube-api-access-t8t85\") on node \"crc\" DevicePath \"\"" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.774208 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" event={"ID":"01c9efa5-01a6-4772-b77a-37d244c2696b","Type":"ContainerDied","Data":"2a4a1e250664490f08f0a21a59b850d02c7cd7bb5f8e3b0bc141daafe42191c6"} Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.774514 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4a1e250664490f08f0a21a59b850d02c7cd7bb5f8e3b0bc141daafe42191c6" Jan 03 03:33:55 crc kubenswrapper[4746]: I0103 03:33:55.774269 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-v56r7" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.108953 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:33:56 crc kubenswrapper[4746]: E0103 03:33:56.109269 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c9efa5-01a6-4772-b77a-37d244c2696b" containerName="barbican-db-sync" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.109288 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c9efa5-01a6-4772-b77a-37d244c2696b" containerName="barbican-db-sync" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.109423 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c9efa5-01a6-4772-b77a-37d244c2696b" containerName="barbican-db-sync" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.110213 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.112841 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-worker-config-data" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.113209 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-config-data" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.113837 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-barbican-dockercfg-67nfm" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.127680 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.153073 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.154542 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.158173 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.159889 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-keystone-listener-config-data" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.246577 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.247899 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.249673 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-api-config-data" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.256868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266181 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbm27\" (UniqueName: \"kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266283 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8r2\" (UniqueName: \"kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266337 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266399 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266469 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266502 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.266532 
4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368013 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts8r2\" (UniqueName: \"kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368093 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368173 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368199 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368227 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368264 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87kb7\" (UniqueName: \"kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368303 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom\") pod 
\"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368352 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368388 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbm27\" (UniqueName: \"kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368472 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.368869 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.369059 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.374100 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.374581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.375098 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.379050 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.391718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts8r2\" (UniqueName: \"kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2\") pod \"barbican-worker-6c6bcc9bcc-zq8s4\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.393244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbm27\" (UniqueName: \"kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27\") pod \"barbican-keystone-listener-56bf966488-wb74c\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.427319 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.469842 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.470765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.470819 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.470874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.470904 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87kb7\" (UniqueName: \"kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.471218 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.474547 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.475109 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.490588 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87kb7\" (UniqueName: \"kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7\") pod \"barbican-api-86b9f45784-2t2nk\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.566415 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.647116 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.648220 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.657119 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.679945 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.680033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.680171 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.680250 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrvg6\" (UniqueName: \"kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.781616 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrvg6\" (UniqueName: \"kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.781692 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.781761 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc 
kubenswrapper[4746]: I0103 03:33:56.781781 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.782230 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.786784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.787856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.816945 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrvg6\" (UniqueName: \"kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6\") pod \"barbican-api-86b9f45784-jwz5g\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.842774 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.852055 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.882881 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grd6f\" (UniqueName: \"kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.882959 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.882982 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.883021 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.913720 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.977562 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.985703 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.990209 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.990246 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.990298 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.990391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grd6f\" (UniqueName: \"kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.991575 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.994174 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:56 crc kubenswrapper[4746]: I0103 03:33:56.994903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.029711 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.032868 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: W0103 03:33:57.036880 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68b5ec47_2fd9_4db7_97da_e216d788f047.slice/crio-59b6e7a6d83a46d32926b27ffb7d093e52fedfcb72578e28ba5ac4b8b9221331 WatchSource:0}: Error finding container 59b6e7a6d83a46d32926b27ffb7d093e52fedfcb72578e28ba5ac4b8b9221331: Status 404 returned error can't find the container with id 59b6e7a6d83a46d32926b27ffb7d093e52fedfcb72578e28ba5ac4b8b9221331 Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.038244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grd6f\" (UniqueName: \"kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f\") pod \"barbican-keystone-listener-56bf966488-qfwln\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.049091 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.056362 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.091556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.091617 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-695lm\" (UniqueName: \"kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.091700 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.091731 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.109919 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:33:57 crc kubenswrapper[4746]: W0103 03:33:57.122669 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ac36e48_abb6_4945_a1a4_bdd18f7cf129.slice/crio-0ed171b8e2b7e28f9f14ab45090bffacf98c8d5bfb74fe7bcaefc0ef1fc69d5d WatchSource:0}: Error finding container 0ed171b8e2b7e28f9f14ab45090bffacf98c8d5bfb74fe7bcaefc0ef1fc69d5d: Status 404 returned error can't find the container with id 0ed171b8e2b7e28f9f14ab45090bffacf98c8d5bfb74fe7bcaefc0ef1fc69d5d Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.176834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.193194 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-695lm\" (UniqueName: \"kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.193541 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.193573 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.193603 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.194409 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.199393 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.199798 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.209568 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-695lm\" (UniqueName: \"kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm\") pod \"barbican-worker-6c6bcc9bcc-ngfdb\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.368149 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.457032 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.602859 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.789322 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:33:57 crc kubenswrapper[4746]: W0103 03:33:57.793116 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08fd585e_061e_4e8a_976e_62bf6ea59f0d.slice/crio-519bbe0f1e42f74ec415808c1b7d1510894a04a62554de28bbfa314132455da8 WatchSource:0}: Error finding container 519bbe0f1e42f74ec415808c1b7d1510894a04a62554de28bbfa314132455da8: Status 404 returned error can't find the container with id 519bbe0f1e42f74ec415808c1b7d1510894a04a62554de28bbfa314132455da8 Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.793178 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerStarted","Data":"4585c12a380098adebb5572273def231496956df4faced2dffab29b7e0860731"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.794455 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerStarted","Data":"59b6e7a6d83a46d32926b27ffb7d093e52fedfcb72578e28ba5ac4b8b9221331"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.796326 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerStarted","Data":"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.796374 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerStarted","Data":"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.796386 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerStarted","Data":"3645fe056037f6993ec9c708b14ec6c9add31e41ed527cb8156be95c03f925fb"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.796414 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.801546 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerStarted","Data":"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.801587 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.801602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerStarted","Data":"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.801613 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerStarted","Data":"0ed171b8e2b7e28f9f14ab45090bffacf98c8d5bfb74fe7bcaefc0ef1fc69d5d"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.801627 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.803389 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerStarted","Data":"bbec779eb5875206601d55f9b8d8a051da61eeb2f24a09dfee88eb0304fbe4f9"} Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.822731 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" podStartSLOduration=1.822707713 podStartE2EDuration="1.822707713s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:33:57.813614771 +0000 UTC m=+1157.663505086" watchObservedRunningTime="2026-01-03 03:33:57.822707713 +0000 UTC m=+1157.672598018" Jan 03 03:33:57 crc kubenswrapper[4746]: I0103 03:33:57.862034 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" podStartSLOduration=1.862014075 podStartE2EDuration="1.862014075s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:33:57.843444851 +0000 UTC m=+1157.693335156" watchObservedRunningTime="2026-01-03 03:33:57.862014075 +0000 UTC m=+1157.711904380" Jan 03 03:33:58 crc kubenswrapper[4746]: I0103 03:33:58.169545 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:33:58 crc kubenswrapper[4746]: I0103 03:33:58.354440 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:33:58 crc kubenswrapper[4746]: I0103 03:33:58.535485 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:33:58 crc kubenswrapper[4746]: I0103 03:33:58.812790 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" 
event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerStarted","Data":"519bbe0f1e42f74ec415808c1b7d1510894a04a62554de28bbfa314132455da8"} Jan 03 03:33:58 crc kubenswrapper[4746]: I0103 03:33:58.813348 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.659052 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.824193 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerStarted","Data":"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.824245 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerStarted","Data":"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.825987 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerStarted","Data":"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.826019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerStarted","Data":"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.827538 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerStarted","Data":"ffa86cf3a73df6b34f8d86eb97c27d43d6e75b6b7f3ea19a7af5a3b88afd6d9f"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.827583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerStarted","Data":"0fa0008a11adce4cabe3778c96418ba97bc03a27e34af51bfc9bd4bf2294c880"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.827587 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker-log" containerID="cri-o://0fa0008a11adce4cabe3778c96418ba97bc03a27e34af51bfc9bd4bf2294c880" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.827592 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker" containerID="cri-o://ffa86cf3a73df6b34f8d86eb97c27d43d6e75b6b7f3ea19a7af5a3b88afd6d9f" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834267 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" 
event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerStarted","Data":"4f24c99ecebafecd1daa2fa676468242ce1697973c08c0761bb0bb7ca70c244a"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerStarted","Data":"16ae517f1177bad321fbcf029c22bc10fdafcd5ddb3eb99beeb467fa802971ee"} Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834377 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener" containerID="cri-o://4f24c99ecebafecd1daa2fa676468242ce1697973c08c0761bb0bb7ca70c244a" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834546 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api-log" containerID="cri-o://f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834377 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener-log" containerID="cri-o://16ae517f1177bad321fbcf029c22bc10fdafcd5ddb3eb99beeb467fa802971ee" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834568 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api" containerID="cri-o://006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834644 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api-log" containerID="cri-o://24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.834690 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api" containerID="cri-o://6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" gracePeriod=30 Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.857461 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" podStartSLOduration=2.706203914 podStartE2EDuration="3.857441744s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="2026-01-03 03:33:57.79639314 +0000 UTC m=+1157.646283455" lastFinishedPulling="2026-01-03 03:33:58.94763097 +0000 UTC m=+1158.797521285" observedRunningTime="2026-01-03 03:33:59.852426871 +0000 UTC m=+1159.702317176" watchObservedRunningTime="2026-01-03 03:33:59.857441744 +0000 UTC m=+1159.707332049" Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.900857 4746 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" podStartSLOduration=1.940175666 podStartE2EDuration="3.900834045s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="2026-01-03 03:33:56.986771066 +0000 UTC m=+1156.836661371" lastFinishedPulling="2026-01-03 03:33:58.947429445 +0000 UTC m=+1158.797319750" observedRunningTime="2026-01-03 03:33:59.888970565 +0000 UTC m=+1159.738860870" watchObservedRunningTime="2026-01-03 03:33:59.900834045 +0000 UTC m=+1159.750724350" Jan 03 03:33:59 crc kubenswrapper[4746]: I0103 03:33:59.953429 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" podStartSLOduration=2.051530021 podStartE2EDuration="3.953411912s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="2026-01-03 03:33:57.044748674 +0000 UTC m=+1156.894638979" lastFinishedPulling="2026-01-03 03:33:58.946630555 +0000 UTC m=+1158.796520870" observedRunningTime="2026-01-03 03:33:59.940347182 +0000 UTC m=+1159.790237487" watchObservedRunningTime="2026-01-03 03:33:59.953411912 +0000 UTC m=+1159.803302217" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.000159 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" podStartSLOduration=2.664912303 podStartE2EDuration="4.000143794s" podCreationTimestamp="2026-01-03 03:33:56 +0000 UTC" firstStartedPulling="2026-01-03 03:33:57.612400029 +0000 UTC m=+1157.462290334" lastFinishedPulling="2026-01-03 03:33:58.94763152 +0000 UTC m=+1158.797521825" observedRunningTime="2026-01-03 03:33:59.999637172 +0000 UTC m=+1159.849527487" watchObservedRunningTime="2026-01-03 03:34:00.000143794 +0000 UTC m=+1159.850034109" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.091013 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.149975 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.710050 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.726265 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.843801 4746 generic.go:334] "Generic (PLEG): container finished" podID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerID="16ae517f1177bad321fbcf029c22bc10fdafcd5ddb3eb99beeb467fa802971ee" exitCode=143 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.843863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerDied","Data":"16ae517f1177bad321fbcf029c22bc10fdafcd5ddb3eb99beeb467fa802971ee"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846158 4746 generic.go:334] "Generic (PLEG): container finished" podID="78772190-130d-40e4-8983-4659dd08d151" containerID="006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" exitCode=0 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846180 4746 generic.go:334] "Generic (PLEG): container finished" podID="78772190-130d-40e4-8983-4659dd08d151" containerID="f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" exitCode=143 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846210 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerDied","Data":"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846239 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerDied","Data":"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846250 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" event={"ID":"78772190-130d-40e4-8983-4659dd08d151","Type":"ContainerDied","Data":"3645fe056037f6993ec9c708b14ec6c9add31e41ed527cb8156be95c03f925fb"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846265 4746 scope.go:117] "RemoveContainer" containerID="006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.846399 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851110 4746 generic.go:334] "Generic (PLEG): container finished" podID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerID="6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" exitCode=0 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851154 4746 generic.go:334] "Generic (PLEG): container finished" podID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerID="24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" exitCode=143 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851239 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerDied","Data":"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851274 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerDied","Data":"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851292 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" event={"ID":"8ac36e48-abb6-4945-a1a4-bdd18f7cf129","Type":"ContainerDied","Data":"0ed171b8e2b7e28f9f14ab45090bffacf98c8d5bfb74fe7bcaefc0ef1fc69d5d"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.851396 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.855624 4746 generic.go:334] "Generic (PLEG): container finished" podID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerID="0fa0008a11adce4cabe3778c96418ba97bc03a27e34af51bfc9bd4bf2294c880" exitCode=143 Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.855726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerDied","Data":"0fa0008a11adce4cabe3778c96418ba97bc03a27e34af51bfc9bd4bf2294c880"} Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.869620 4746 scope.go:117] "RemoveContainer" containerID="f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872382 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs\") pod \"78772190-130d-40e4-8983-4659dd08d151\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872515 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom\") pod \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872579 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom\") pod \"78772190-130d-40e4-8983-4659dd08d151\" (UID: 
\"78772190-130d-40e4-8983-4659dd08d151\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872629 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data\") pod \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872684 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data\") pod \"78772190-130d-40e4-8983-4659dd08d151\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872823 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrvg6\" (UniqueName: \"kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6\") pod \"78772190-130d-40e4-8983-4659dd08d151\" (UID: \"78772190-130d-40e4-8983-4659dd08d151\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872870 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs\") pod \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.872913 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87kb7\" (UniqueName: \"kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7\") pod \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\" (UID: \"8ac36e48-abb6-4945-a1a4-bdd18f7cf129\") " Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.874263 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs" (OuterVolumeSpecName: "logs") pod "78772190-130d-40e4-8983-4659dd08d151" (UID: "78772190-130d-40e4-8983-4659dd08d151"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.875233 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs" (OuterVolumeSpecName: "logs") pod "8ac36e48-abb6-4945-a1a4-bdd18f7cf129" (UID: "8ac36e48-abb6-4945-a1a4-bdd18f7cf129"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.880344 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ac36e48-abb6-4945-a1a4-bdd18f7cf129" (UID: "8ac36e48-abb6-4945-a1a4-bdd18f7cf129"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.881321 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "78772190-130d-40e4-8983-4659dd08d151" (UID: "78772190-130d-40e4-8983-4659dd08d151"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.881495 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7" (OuterVolumeSpecName: "kube-api-access-87kb7") pod "8ac36e48-abb6-4945-a1a4-bdd18f7cf129" (UID: "8ac36e48-abb6-4945-a1a4-bdd18f7cf129"). InnerVolumeSpecName "kube-api-access-87kb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.892412 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6" (OuterVolumeSpecName: "kube-api-access-mrvg6") pod "78772190-130d-40e4-8983-4659dd08d151" (UID: "78772190-130d-40e4-8983-4659dd08d151"). InnerVolumeSpecName "kube-api-access-mrvg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.913890 4746 scope.go:117] "RemoveContainer" containerID="006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" Jan 03 03:34:00 crc kubenswrapper[4746]: E0103 03:34:00.915110 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178\": container with ID starting with 006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178 not found: ID does not exist" containerID="006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915171 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178"} err="failed to get container status \"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178\": rpc error: code = NotFound desc = could not find container \"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178\": container with ID starting with 006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915197 4746 scope.go:117] "RemoveContainer" containerID="f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" Jan 03 03:34:00 crc kubenswrapper[4746]: E0103 03:34:00.915552 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c\": container with ID starting with f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c not found: ID does not exist" containerID="f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915598 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c"} err="failed to get container status \"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c\": rpc error: code = NotFound desc = could not find container \"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c\": container with ID starting with f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915625 4746 scope.go:117] "RemoveContainer" 
containerID="006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915884 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178"} err="failed to get container status \"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178\": rpc error: code = NotFound desc = could not find container \"006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178\": container with ID starting with 006387129bce848490f536895d6c24c5298e5b850bebeb9f7242d7626d68e178 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.915920 4746 scope.go:117] "RemoveContainer" containerID="f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.916344 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c"} err="failed to get container status \"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c\": rpc error: code = NotFound desc = could not find container \"f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c\": container with ID starting with f1f987b52942f51410f945ae68bd3cc69e811246dccd07fe4ae93d784b08469c not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.916374 4746 scope.go:117] "RemoveContainer" containerID="6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.937036 4746 scope.go:117] "RemoveContainer" containerID="24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.943614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data" (OuterVolumeSpecName: "config-data") pod "8ac36e48-abb6-4945-a1a4-bdd18f7cf129" (UID: "8ac36e48-abb6-4945-a1a4-bdd18f7cf129"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.951501 4746 scope.go:117] "RemoveContainer" containerID="6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" Jan 03 03:34:00 crc kubenswrapper[4746]: E0103 03:34:00.952142 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9\": container with ID starting with 6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9 not found: ID does not exist" containerID="6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.952184 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9"} err="failed to get container status \"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9\": rpc error: code = NotFound desc = could not find container \"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9\": container with ID starting with 6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.952211 4746 scope.go:117] "RemoveContainer" containerID="24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" Jan 03 03:34:00 crc kubenswrapper[4746]: E0103 03:34:00.953090 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2\": container with ID starting with 24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2 not found: ID does not exist" containerID="24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.953121 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2"} err="failed to get container status \"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2\": rpc error: code = NotFound desc = could not find container \"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2\": container with ID starting with 24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.953139 4746 scope.go:117] "RemoveContainer" containerID="6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.953351 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9"} err="failed to get container status \"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9\": rpc error: code = NotFound desc = could not find container \"6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9\": container with ID starting with 6e1e97d914239aa3679ba65d221d6e9a0e067ae99a831c95ddbb0a37c2bc2ef9 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.953371 4746 scope.go:117] "RemoveContainer" containerID="24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.953547 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2"} err="failed to get container status \"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2\": rpc error: code = NotFound desc = could not find container \"24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2\": container with ID starting with 24189353fd40b21e167b48b7de4cd3f9285e76b3c32314c8b4173f853e66b1c2 not found: ID does not exist" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.956599 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data" (OuterVolumeSpecName: "config-data") pod "78772190-130d-40e4-8983-4659dd08d151" (UID: "78772190-130d-40e4-8983-4659dd08d151"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982706 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78772190-130d-40e4-8983-4659dd08d151-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982733 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982744 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982752 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982762 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78772190-130d-40e4-8983-4659dd08d151-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982772 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrvg6\" (UniqueName: \"kubernetes.io/projected/78772190-130d-40e4-8983-4659dd08d151-kube-api-access-mrvg6\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982781 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:00 crc kubenswrapper[4746]: I0103 03:34:00.982790 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87kb7\" (UniqueName: \"kubernetes.io/projected/8ac36e48-abb6-4945-a1a4-bdd18f7cf129-kube-api-access-87kb7\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.182812 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.188927 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-jwz5g"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.200024 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.206161 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-api-86b9f45784-2t2nk"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.264970 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-v56r7"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.271699 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-v56r7"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334026 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican418b-account-delete-2jnqf"] Jan 03 03:34:01 crc kubenswrapper[4746]: E0103 03:34:01.334387 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334408 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: E0103 03:34:01.334422 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: E0103 03:34:01.334450 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334459 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: E0103 03:34:01.334475 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334483 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334628 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334649 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334678 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api-log" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.334690 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="78772190-130d-40e4-8983-4659dd08d151" containerName="barbican-api" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.335325 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.346786 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican418b-account-delete-2jnqf"] Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.494699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.494770 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq67\" (UniqueName: \"kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.596610 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.596706 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq67\" (UniqueName: \"kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.597870 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.615803 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq67\" (UniqueName: \"kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67\") pod \"barbican418b-account-delete-2jnqf\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.652562 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.882766 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener-log" containerID="cri-o://155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" gracePeriod=30 Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.882855 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker-log" containerID="cri-o://c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" gracePeriod=30 Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.882884 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker" containerID="cri-o://5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" gracePeriod=30 Jan 03 03:34:01 crc kubenswrapper[4746]: I0103 03:34:01.882899 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener" containerID="cri-o://922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" gracePeriod=30 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.125703 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican418b-account-delete-2jnqf"] Jan 03 03:34:02 crc kubenswrapper[4746]: W0103 03:34:02.135063 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9967785a_e58e_4417_bf8b_ed210f4ff865.slice/crio-2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf WatchSource:0}: Error finding container 2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf: Status 404 returned error can't find the container with id 2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.477047 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c9efa5-01a6-4772-b77a-37d244c2696b" path="/var/lib/kubelet/pods/01c9efa5-01a6-4772-b77a-37d244c2696b/volumes" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.478010 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78772190-130d-40e4-8983-4659dd08d151" path="/var/lib/kubelet/pods/78772190-130d-40e4-8983-4659dd08d151/volumes" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.484950 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac36e48-abb6-4945-a1a4-bdd18f7cf129" path="/var/lib/kubelet/pods/8ac36e48-abb6-4945-a1a4-bdd18f7cf129/volumes" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.795869 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.877919 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898151 4746 generic.go:334] "Generic (PLEG): container finished" podID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerID="5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" exitCode=0 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898182 4746 generic.go:334] "Generic (PLEG): container finished" podID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerID="c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" exitCode=143 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898222 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerDied","Data":"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898250 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerDied","Data":"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898259 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" event={"ID":"08fd585e-061e-4e8a-976e-62bf6ea59f0d","Type":"ContainerDied","Data":"519bbe0f1e42f74ec415808c1b7d1510894a04a62554de28bbfa314132455da8"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898275 4746 scope.go:117] "RemoveContainer" containerID="5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.898389 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.900900 4746 generic.go:334] "Generic (PLEG): container finished" podID="0746dec1-aec8-4f03-b098-83b54f42b016" containerID="922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" exitCode=0 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.900923 4746 generic.go:334] "Generic (PLEG): container finished" podID="0746dec1-aec8-4f03-b098-83b54f42b016" containerID="155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" exitCode=143 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.900967 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerDied","Data":"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.900992 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerDied","Data":"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.901009 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" event={"ID":"0746dec1-aec8-4f03-b098-83b54f42b016","Type":"ContainerDied","Data":"bbec779eb5875206601d55f9b8d8a051da61eeb2f24a09dfee88eb0304fbe4f9"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.901067 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.903345 4746 generic.go:334] "Generic (PLEG): container finished" podID="9967785a-e58e-4417-bf8b-ed210f4ff865" containerID="f60c54dc7f0e59c332cd801e222f1b81506c63f5ce617096f9e1447b568b53a7" exitCode=0 Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.903394 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" event={"ID":"9967785a-e58e-4417-bf8b-ed210f4ff865","Type":"ContainerDied","Data":"f60c54dc7f0e59c332cd801e222f1b81506c63f5ce617096f9e1447b568b53a7"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.903416 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" event={"ID":"9967785a-e58e-4417-bf8b-ed210f4ff865","Type":"ContainerStarted","Data":"2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf"} Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.920677 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs\") pod \"0746dec1-aec8-4f03-b098-83b54f42b016\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.920863 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom\") pod \"0746dec1-aec8-4f03-b098-83b54f42b016\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.920907 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data\") pod \"0746dec1-aec8-4f03-b098-83b54f42b016\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.920966 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grd6f\" (UniqueName: \"kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f\") pod \"0746dec1-aec8-4f03-b098-83b54f42b016\" (UID: \"0746dec1-aec8-4f03-b098-83b54f42b016\") " Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.923046 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs" (OuterVolumeSpecName: "logs") pod "0746dec1-aec8-4f03-b098-83b54f42b016" (UID: "0746dec1-aec8-4f03-b098-83b54f42b016"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.925511 4746 scope.go:117] "RemoveContainer" containerID="c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.927406 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0746dec1-aec8-4f03-b098-83b54f42b016" (UID: "0746dec1-aec8-4f03-b098-83b54f42b016"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.928736 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f" (OuterVolumeSpecName: "kube-api-access-grd6f") pod "0746dec1-aec8-4f03-b098-83b54f42b016" (UID: "0746dec1-aec8-4f03-b098-83b54f42b016"). InnerVolumeSpecName "kube-api-access-grd6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.939670 4746 scope.go:117] "RemoveContainer" containerID="5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" Jan 03 03:34:02 crc kubenswrapper[4746]: E0103 03:34:02.940136 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c\": container with ID starting with 5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c not found: ID does not exist" containerID="5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.940251 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c"} err="failed to get container status \"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c\": rpc error: code = NotFound desc = could not find container \"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c\": container with ID starting with 5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.940340 4746 scope.go:117] "RemoveContainer" containerID="c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" Jan 03 03:34:02 crc kubenswrapper[4746]: E0103 03:34:02.940853 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6\": container with ID starting with c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6 not found: ID does not exist" containerID="c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.940875 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6"} err="failed to get container status \"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6\": rpc error: code = NotFound desc = could not find container \"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6\": container with ID starting with c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6 not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.940896 4746 scope.go:117] "RemoveContainer" containerID="5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.941164 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c"} err="failed to get container status \"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c\": rpc error: code = NotFound desc = could not find container \"5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c\": container with ID starting with 5566b3dd25e6de8d44faeaedfb44ecdafed05467c32b81796a678df670c0180c not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.941213 4746 scope.go:117] "RemoveContainer" containerID="c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.941479 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6"} err="failed to get container status \"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6\": rpc error: code = NotFound desc = could not find container \"c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6\": container with ID starting with c243f53efc28fd085a7f3bcd0b77a62779cdaa69acfc36d6c4e99740317117c6 not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.941499 4746 scope.go:117] "RemoveContainer" containerID="922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.954296 4746 scope.go:117] "RemoveContainer" containerID="155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.957355 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data" (OuterVolumeSpecName: "config-data") pod "0746dec1-aec8-4f03-b098-83b54f42b016" (UID: "0746dec1-aec8-4f03-b098-83b54f42b016"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.966923 4746 scope.go:117] "RemoveContainer" containerID="922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" Jan 03 03:34:02 crc kubenswrapper[4746]: E0103 03:34:02.967301 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff\": container with ID starting with 922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff not found: ID does not exist" containerID="922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967346 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff"} err="failed to get container status \"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff\": rpc error: code = NotFound desc = could not find container \"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff\": container with ID starting with 922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967371 4746 scope.go:117] "RemoveContainer" containerID="155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" Jan 03 03:34:02 crc kubenswrapper[4746]: E0103 03:34:02.967614 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c\": container with ID starting with 155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c not found: ID does not exist" containerID="155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967642 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c"} err="failed to get container status \"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c\": rpc error: code = NotFound 
desc = could not find container \"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c\": container with ID starting with 155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967672 4746 scope.go:117] "RemoveContainer" containerID="922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967846 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff"} err="failed to get container status \"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff\": rpc error: code = NotFound desc = could not find container \"922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff\": container with ID starting with 922d638dab86073da5bb788289f711fe122f4f4d3c287365e95cb9f7fc6a45ff not found: ID does not exist" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.967862 4746 scope.go:117] "RemoveContainer" containerID="155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c" Jan 03 03:34:02 crc kubenswrapper[4746]: I0103 03:34:02.968404 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c"} err="failed to get container status \"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c\": rpc error: code = NotFound desc = could not find container \"155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c\": container with ID starting with 155bfbaf9a1740e93db7144668f40e334a1ccd7fa1639002d6b26a3fbd14f75c not found: ID does not exist" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.021905 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs\") pod \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022034 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data\") pod \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom\") pod \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022242 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-695lm\" (UniqueName: \"kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm\") pod \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\" (UID: \"08fd585e-061e-4e8a-976e-62bf6ea59f0d\") " Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022436 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs" (OuterVolumeSpecName: "logs") pod "08fd585e-061e-4e8a-976e-62bf6ea59f0d" (UID: "08fd585e-061e-4e8a-976e-62bf6ea59f0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022585 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0746dec1-aec8-4f03-b098-83b54f42b016-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022614 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08fd585e-061e-4e8a-976e-62bf6ea59f0d-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022630 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022644 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0746dec1-aec8-4f03-b098-83b54f42b016-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.022675 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grd6f\" (UniqueName: \"kubernetes.io/projected/0746dec1-aec8-4f03-b098-83b54f42b016-kube-api-access-grd6f\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.025041 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "08fd585e-061e-4e8a-976e-62bf6ea59f0d" (UID: "08fd585e-061e-4e8a-976e-62bf6ea59f0d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.025802 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm" (OuterVolumeSpecName: "kube-api-access-695lm") pod "08fd585e-061e-4e8a-976e-62bf6ea59f0d" (UID: "08fd585e-061e-4e8a-976e-62bf6ea59f0d"). InnerVolumeSpecName "kube-api-access-695lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.050479 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data" (OuterVolumeSpecName: "config-data") pod "08fd585e-061e-4e8a-976e-62bf6ea59f0d" (UID: "08fd585e-061e-4e8a-976e-62bf6ea59f0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.124291 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.124322 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08fd585e-061e-4e8a-976e-62bf6ea59f0d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.124334 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-695lm\" (UniqueName: \"kubernetes.io/projected/08fd585e-061e-4e8a-976e-62bf6ea59f0d-kube-api-access-695lm\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.232125 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.240334 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-qfwln"] Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.245112 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:34:03 crc kubenswrapper[4746]: I0103 03:34:03.249228 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-ngfdb"] Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.237562 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.342538 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmq67\" (UniqueName: \"kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67\") pod \"9967785a-e58e-4417-bf8b-ed210f4ff865\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.342688 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts\") pod \"9967785a-e58e-4417-bf8b-ed210f4ff865\" (UID: \"9967785a-e58e-4417-bf8b-ed210f4ff865\") " Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.343440 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9967785a-e58e-4417-bf8b-ed210f4ff865" (UID: "9967785a-e58e-4417-bf8b-ed210f4ff865"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.347109 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67" (OuterVolumeSpecName: "kube-api-access-xmq67") pod "9967785a-e58e-4417-bf8b-ed210f4ff865" (UID: "9967785a-e58e-4417-bf8b-ed210f4ff865"). InnerVolumeSpecName "kube-api-access-xmq67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.445073 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9967785a-e58e-4417-bf8b-ed210f4ff865-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.445124 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmq67\" (UniqueName: \"kubernetes.io/projected/9967785a-e58e-4417-bf8b-ed210f4ff865-kube-api-access-xmq67\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.474183 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" path="/var/lib/kubelet/pods/0746dec1-aec8-4f03-b098-83b54f42b016/volumes" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.474834 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" path="/var/lib/kubelet/pods/08fd585e-061e-4e8a-976e-62bf6ea59f0d/volumes" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.933510 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" event={"ID":"9967785a-e58e-4417-bf8b-ed210f4ff865","Type":"ContainerDied","Data":"2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf"} Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.933566 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d2e173ae485c48dcba823d57bac37be61ff66391cf5da694f073ae2fa7e96cf" Jan 03 03:34:04 crc kubenswrapper[4746]: I0103 03:34:04.933614 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican418b-account-delete-2jnqf" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.366768 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-jhb4t"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.372974 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-jhb4t"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.379191 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican418b-account-delete-2jnqf"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.385015 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-418b-account-create-update-xljkl"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.391092 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican418b-account-delete-2jnqf"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.397310 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-418b-account-create-update-xljkl"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.476457 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9967785a-e58e-4417-bf8b-ed210f4ff865" path="/var/lib/kubelet/pods/9967785a-e58e-4417-bf8b-ed210f4ff865/volumes" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.479267 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad1b5c3-7f2a-4012-8f30-eec86173cce1" path="/var/lib/kubelet/pods/bad1b5c3-7f2a-4012-8f30-eec86173cce1/volumes" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.480073 4746 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="fc637f0c-5764-44a6-8a51-52e17b52380d" path="/var/lib/kubelet/pods/fc637f0c-5764-44a6-8a51-52e17b52380d/volumes" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550023 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-db-create-s4ff8"] Jan 03 03:34:06 crc kubenswrapper[4746]: E0103 03:34:06.550319 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550331 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener" Jan 03 03:34:06 crc kubenswrapper[4746]: E0103 03:34:06.550348 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker-log" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550355 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker-log" Jan 03 03:34:06 crc kubenswrapper[4746]: E0103 03:34:06.550365 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9967785a-e58e-4417-bf8b-ed210f4ff865" containerName="mariadb-account-delete" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550372 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9967785a-e58e-4417-bf8b-ed210f4ff865" containerName="mariadb-account-delete" Jan 03 03:34:06 crc kubenswrapper[4746]: E0103 03:34:06.550382 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550389 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker" Jan 03 03:34:06 crc kubenswrapper[4746]: E0103 03:34:06.550398 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener-log" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550405 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener-log" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550524 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9967785a-e58e-4417-bf8b-ed210f4ff865" containerName="mariadb-account-delete" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550534 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener-log" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550545 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0746dec1-aec8-4f03-b098-83b54f42b016" containerName="barbican-keystone-listener" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550557 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker-log" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.550565 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fd585e-061e-4e8a-976e-62bf6ea59f0d" containerName="barbican-worker" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.551067 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.564827 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-s4ff8"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.590866 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.591910 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.593610 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-db-secret" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.646846 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j"] Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.676890 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.677128 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449vp\" (UniqueName: \"kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.677227 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2265f\" (UniqueName: \"kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.677284 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.778555 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-449vp\" (UniqueName: \"kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.778698 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2265f\" (UniqueName: \"kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " 
pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.778751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.778812 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.779810 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.780095 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.798812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2265f\" (UniqueName: \"kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f\") pod \"barbican-f1fc-account-create-update-m5b5j\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.801240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-449vp\" (UniqueName: \"kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp\") pod \"barbican-db-create-s4ff8\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.914949 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:06 crc kubenswrapper[4746]: I0103 03:34:06.924389 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.376784 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-s4ff8"] Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.439621 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j"] Jan 03 03:34:07 crc kubenswrapper[4746]: W0103 03:34:07.445882 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b68811_d0b9_4fb6_8737_c2176b27e460.slice/crio-5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474 WatchSource:0}: Error finding container 5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474: Status 404 returned error can't find the container with id 5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474 Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.955600 4746 generic.go:334] "Generic (PLEG): container finished" podID="384da4a3-d406-436c-9dac-bdde758d7784" containerID="32ddbc8e2acdb5d7575bd47f1cd6d7ac981696f28d5d7a57547d88b32c62eea6" exitCode=0 Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.955763 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" event={"ID":"384da4a3-d406-436c-9dac-bdde758d7784","Type":"ContainerDied","Data":"32ddbc8e2acdb5d7575bd47f1cd6d7ac981696f28d5d7a57547d88b32c62eea6"} Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.956388 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" event={"ID":"384da4a3-d406-436c-9dac-bdde758d7784","Type":"ContainerStarted","Data":"a8b7caf1e1d7a32b3ab7ee3b58921bfb955b9538f6f4a07ae9ec0f631b994274"} Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.957930 4746 generic.go:334] "Generic (PLEG): container finished" podID="c8b68811-d0b9-4fb6-8737-c2176b27e460" containerID="5674d57dd1cb733894c446f9964dc3b4916dda1e2d31609ae504990a94f15e87" exitCode=0 Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.957985 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" event={"ID":"c8b68811-d0b9-4fb6-8737-c2176b27e460","Type":"ContainerDied","Data":"5674d57dd1cb733894c446f9964dc3b4916dda1e2d31609ae504990a94f15e87"} Jan 03 03:34:07 crc kubenswrapper[4746]: I0103 03:34:07.958029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" event={"ID":"c8b68811-d0b9-4fb6-8737-c2176b27e460","Type":"ContainerStarted","Data":"5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474"} Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.301944 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.306694 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.418845 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2265f\" (UniqueName: \"kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f\") pod \"c8b68811-d0b9-4fb6-8737-c2176b27e460\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.418931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts\") pod \"c8b68811-d0b9-4fb6-8737-c2176b27e460\" (UID: \"c8b68811-d0b9-4fb6-8737-c2176b27e460\") " Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.418967 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-449vp\" (UniqueName: \"kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp\") pod \"384da4a3-d406-436c-9dac-bdde758d7784\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.419017 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts\") pod \"384da4a3-d406-436c-9dac-bdde758d7784\" (UID: \"384da4a3-d406-436c-9dac-bdde758d7784\") " Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.419798 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b68811-d0b9-4fb6-8737-c2176b27e460" (UID: "c8b68811-d0b9-4fb6-8737-c2176b27e460"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.419952 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "384da4a3-d406-436c-9dac-bdde758d7784" (UID: "384da4a3-d406-436c-9dac-bdde758d7784"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.424075 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f" (OuterVolumeSpecName: "kube-api-access-2265f") pod "c8b68811-d0b9-4fb6-8737-c2176b27e460" (UID: "c8b68811-d0b9-4fb6-8737-c2176b27e460"). InnerVolumeSpecName "kube-api-access-2265f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.425221 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp" (OuterVolumeSpecName: "kube-api-access-449vp") pod "384da4a3-d406-436c-9dac-bdde758d7784" (UID: "384da4a3-d406-436c-9dac-bdde758d7784"). InnerVolumeSpecName "kube-api-access-449vp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.519800 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b68811-d0b9-4fb6-8737-c2176b27e460-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.520008 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-449vp\" (UniqueName: \"kubernetes.io/projected/384da4a3-d406-436c-9dac-bdde758d7784-kube-api-access-449vp\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.520022 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/384da4a3-d406-436c-9dac-bdde758d7784-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.520030 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2265f\" (UniqueName: \"kubernetes.io/projected/c8b68811-d0b9-4fb6-8737-c2176b27e460-kube-api-access-2265f\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.979431 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" event={"ID":"384da4a3-d406-436c-9dac-bdde758d7784","Type":"ContainerDied","Data":"a8b7caf1e1d7a32b3ab7ee3b58921bfb955b9538f6f4a07ae9ec0f631b994274"} Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.979462 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-create-s4ff8" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.979487 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b7caf1e1d7a32b3ab7ee3b58921bfb955b9538f6f4a07ae9ec0f631b994274" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.981631 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" event={"ID":"c8b68811-d0b9-4fb6-8737-c2176b27e460","Type":"ContainerDied","Data":"5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474"} Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.981686 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0ef5520c28adf661e79ce4cd7876de746d0e4dcf16cee440e2e217d29b0474" Jan 03 03:34:09 crc kubenswrapper[4746]: I0103 03:34:09.981743 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.655219 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.932389 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-mt8m8"] Jan 03 03:34:11 crc kubenswrapper[4746]: E0103 03:34:11.932699 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b68811-d0b9-4fb6-8737-c2176b27e460" containerName="mariadb-account-create-update" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.932718 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b68811-d0b9-4fb6-8737-c2176b27e460" containerName="mariadb-account-create-update" Jan 03 03:34:11 crc kubenswrapper[4746]: E0103 03:34:11.932745 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384da4a3-d406-436c-9dac-bdde758d7784" containerName="mariadb-database-create" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.932755 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="384da4a3-d406-436c-9dac-bdde758d7784" containerName="mariadb-database-create" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.932920 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="384da4a3-d406-436c-9dac-bdde758d7784" containerName="mariadb-database-create" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.932944 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b68811-d0b9-4fb6-8737-c2176b27e460" containerName="mariadb-account-create-update" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.933460 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.935415 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"combined-ca-bundle" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.935766 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-barbican-dockercfg-7pth7" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.947147 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-mt8m8"] Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.960253 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.960322 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4fk\" (UniqueName: \"kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:11 crc kubenswrapper[4746]: I0103 03:34:11.960349 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.062172 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.062277 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4fk\" (UniqueName: \"kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.062327 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.078184 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.078533 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.090501 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4fk\" (UniqueName: \"kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk\") pod \"barbican-db-sync-mt8m8\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.250833 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:12 crc kubenswrapper[4746]: I0103 03:34:12.796043 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-mt8m8"] Jan 03 03:34:12 crc kubenswrapper[4746]: W0103 03:34:12.800929 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5e267e_20f6_45de_8307_1a6931ea30e6.slice/crio-b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716 WatchSource:0}: Error finding container b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716: Status 404 returned error can't find the container with id b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716 Jan 03 03:34:13 crc kubenswrapper[4746]: I0103 03:34:13.003470 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" event={"ID":"7a5e267e-20f6-45de-8307-1a6931ea30e6","Type":"ContainerStarted","Data":"af22d6cc1e27a251921b7332fbb505a31ad4d599b4de6e07e1bc6f49f48c84a3"} Jan 03 03:34:13 crc kubenswrapper[4746]: I0103 03:34:13.003824 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" event={"ID":"7a5e267e-20f6-45de-8307-1a6931ea30e6","Type":"ContainerStarted","Data":"b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716"} Jan 03 03:34:13 crc kubenswrapper[4746]: I0103 03:34:13.018563 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" podStartSLOduration=2.018539582 podStartE2EDuration="2.018539582s" podCreationTimestamp="2026-01-03 03:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:34:13.015204121 +0000 UTC m=+1172.865094426" watchObservedRunningTime="2026-01-03 03:34:13.018539582 +0000 UTC m=+1172.868429887" Jan 03 03:34:15 crc kubenswrapper[4746]: I0103 03:34:15.017077 4746 generic.go:334] "Generic (PLEG): container finished" podID="7a5e267e-20f6-45de-8307-1a6931ea30e6" containerID="af22d6cc1e27a251921b7332fbb505a31ad4d599b4de6e07e1bc6f49f48c84a3" exitCode=0 Jan 03 03:34:15 crc kubenswrapper[4746]: I0103 03:34:15.017268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" event={"ID":"7a5e267e-20f6-45de-8307-1a6931ea30e6","Type":"ContainerDied","Data":"af22d6cc1e27a251921b7332fbb505a31ad4d599b4de6e07e1bc6f49f48c84a3"} Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.342049 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.453973 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle\") pod \"7a5e267e-20f6-45de-8307-1a6931ea30e6\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.455185 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data\") pod \"7a5e267e-20f6-45de-8307-1a6931ea30e6\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.455295 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4fk\" (UniqueName: \"kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk\") pod \"7a5e267e-20f6-45de-8307-1a6931ea30e6\" (UID: \"7a5e267e-20f6-45de-8307-1a6931ea30e6\") " Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.469874 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk" (OuterVolumeSpecName: "kube-api-access-qc4fk") pod "7a5e267e-20f6-45de-8307-1a6931ea30e6" (UID: "7a5e267e-20f6-45de-8307-1a6931ea30e6"). InnerVolumeSpecName "kube-api-access-qc4fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.477996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7a5e267e-20f6-45de-8307-1a6931ea30e6" (UID: "7a5e267e-20f6-45de-8307-1a6931ea30e6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.484872 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a5e267e-20f6-45de-8307-1a6931ea30e6" (UID: "7a5e267e-20f6-45de-8307-1a6931ea30e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.557144 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.557173 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7a5e267e-20f6-45de-8307-1a6931ea30e6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:16 crc kubenswrapper[4746]: I0103 03:34:16.557185 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4fk\" (UniqueName: \"kubernetes.io/projected/7a5e267e-20f6-45de-8307-1a6931ea30e6-kube-api-access-qc4fk\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.031122 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" event={"ID":"7a5e267e-20f6-45de-8307-1a6931ea30e6","Type":"ContainerDied","Data":"b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716"} Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.031164 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38bc0fcb301546aebc172ea0adacf74edb96a0460d8c1b42bb94712964bb716" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.031207 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-db-sync-mt8m8" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.259792 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:17 crc kubenswrapper[4746]: E0103 03:34:17.260150 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5e267e-20f6-45de-8307-1a6931ea30e6" containerName="barbican-db-sync" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.260170 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5e267e-20f6-45de-8307-1a6931ea30e6" containerName="barbican-db-sync" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.260333 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5e267e-20f6-45de-8307-1a6931ea30e6" containerName="barbican-db-sync" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.261185 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.264369 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"combined-ca-bundle" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.265165 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-barbican-dockercfg-7pth7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.268422 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.284932 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.286123 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.318076 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx499\" (UniqueName: \"kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366440 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366482 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366533 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6hl\" (UniqueName: \"kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366590 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366612 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.366633 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.370606 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.373144 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.376033 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"barbican-api-config-data" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.376033 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"cert-barbican-internal-svc" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.378472 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"cert-barbican-public-svc" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.416838 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468021 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468110 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb44q\" (UniqueName: \"kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468139 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6hl\" (UniqueName: \"kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468186 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468221 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468244 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468275 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468304 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx499\" (UniqueName: \"kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468331 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468375 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468399 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468422 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data\") pod 
\"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468447 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468472 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468502 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468529 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.468554 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.469393 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.471271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.474344 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.474788 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.475230 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.475609 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.486486 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.486744 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6hl\" (UniqueName: \"kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.491347 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx499\" (UniqueName: \"kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499\") pod \"barbican-keystone-listener-79f4d74dc4-2qwg7\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.496921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data\") pod \"barbican-worker-649968f979-6ntm6\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570497 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb44q\" (UniqueName: \"kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570572 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " 
pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570616 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570670 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570699 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.570738 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.571178 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.574138 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.574311 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.574755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: 
\"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.575316 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.575583 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.575927 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.591286 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb44q\" (UniqueName: \"kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q\") pod \"barbican-api-5444c5fcdb-ztr6v\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.605443 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.730342 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:17 crc kubenswrapper[4746]: I0103 03:34:17.959160 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.040116 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerStarted","Data":"93797496535e5e470ba9bacb854a84d96151949aabba7afbbf017c4c2808874c"} Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.069517 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.267692 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:18 crc kubenswrapper[4746]: W0103 03:34:18.272069 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6122b17a_dabe_4c54_8631_f377d5e4b576.slice/crio-1675818d9cd67f2b82130b08e516fe282c500eab2ee0b37269bb8d3d83f59100 WatchSource:0}: Error finding container 1675818d9cd67f2b82130b08e516fe282c500eab2ee0b37269bb8d3d83f59100: Status 404 returned error can't find the container with id 1675818d9cd67f2b82130b08e516fe282c500eab2ee0b37269bb8d3d83f59100 Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.445755 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-mt8m8"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.459386 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-db-sync-mt8m8"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.475484 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5e267e-20f6-45de-8307-1a6931ea30e6" path="/var/lib/kubelet/pods/7a5e267e-20f6-45de-8307-1a6931ea30e6/volumes" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.505191 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.524711 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.525823 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.537693 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.552879 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.576162 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b"] Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.582862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.583136 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zqq\" (UniqueName: \"kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.684384 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.684679 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zqq\" (UniqueName: \"kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.685314 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.710360 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zqq\" (UniqueName: \"kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq\") pod \"barbicanf1fc-account-delete-pds4b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:18 crc kubenswrapper[4746]: I0103 03:34:18.905232 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048072 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerStarted","Data":"8499be4a9306487bbb72dc8035732ee672ba052232114b3961fbf8f93045a599"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048420 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerStarted","Data":"04c820c210f76dab3ef8f810caffc18b329e726662ff2507586342ee33f88def"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048434 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerStarted","Data":"1675818d9cd67f2b82130b08e516fe282c500eab2ee0b37269bb8d3d83f59100"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048579 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api-log" containerID="cri-o://04c820c210f76dab3ef8f810caffc18b329e726662ff2507586342ee33f88def" gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048907 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.048955 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.049232 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api" containerID="cri-o://8499be4a9306487bbb72dc8035732ee672ba052232114b3961fbf8f93045a599" gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.052072 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerStarted","Data":"4500195ac59180729b0eab2f18b54f9287036600fb7dc514444b6cf72c310476"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.052176 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerStarted","Data":"0dc541d246ab1f5eb9f70e229548c4e830fb637c2b22f8a53235d4c7c296c08b"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.052255 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerStarted","Data":"068296ad03bc1b276c719ac15fd84e2f96b27896089d54dc4a3fa157b768906f"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.052467 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker-log" containerID="cri-o://0dc541d246ab1f5eb9f70e229548c4e830fb637c2b22f8a53235d4c7c296c08b" 
gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.052599 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker" containerID="cri-o://4500195ac59180729b0eab2f18b54f9287036600fb7dc514444b6cf72c310476" gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.057782 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerStarted","Data":"d605a9b27512dec4d2fd813f4205f6daf28f8e916c1870e230e71184f0f54d04"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.058473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerStarted","Data":"5daa0209abd0ce8e89476e44f935cb47979dc05789c51323d2f6e47435b26208"} Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.058422 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener" containerID="cri-o://d605a9b27512dec4d2fd813f4205f6daf28f8e916c1870e230e71184f0f54d04" gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.058401 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener-log" containerID="cri-o://5daa0209abd0ce8e89476e44f935cb47979dc05789c51323d2f6e47435b26208" gracePeriod=30 Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.083377 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" podStartSLOduration=2.083355616 podStartE2EDuration="2.083355616s" podCreationTimestamp="2026-01-03 03:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:34:19.080145377 +0000 UTC m=+1178.930035682" watchObservedRunningTime="2026-01-03 03:34:19.083355616 +0000 UTC m=+1178.933245921" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.107756 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" podStartSLOduration=2.107732873 podStartE2EDuration="2.107732873s" podCreationTimestamp="2026-01-03 03:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:34:19.100957347 +0000 UTC m=+1178.950847672" watchObservedRunningTime="2026-01-03 03:34:19.107732873 +0000 UTC m=+1178.957623188" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.142312 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" podStartSLOduration=2.142276239 podStartE2EDuration="2.142276239s" podCreationTimestamp="2026-01-03 03:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:34:19.136398515 +0000 UTC 
m=+1178.986288820" watchObservedRunningTime="2026-01-03 03:34:19.142276239 +0000 UTC m=+1178.992166554" Jan 03 03:34:19 crc kubenswrapper[4746]: I0103 03:34:19.568519 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b"] Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.077065 4746 generic.go:334] "Generic (PLEG): container finished" podID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerID="0dc541d246ab1f5eb9f70e229548c4e830fb637c2b22f8a53235d4c7c296c08b" exitCode=143 Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.077149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerDied","Data":"0dc541d246ab1f5eb9f70e229548c4e830fb637c2b22f8a53235d4c7c296c08b"} Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.079030 4746 generic.go:334] "Generic (PLEG): container finished" podID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerID="5daa0209abd0ce8e89476e44f935cb47979dc05789c51323d2f6e47435b26208" exitCode=143 Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.079091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerDied","Data":"5daa0209abd0ce8e89476e44f935cb47979dc05789c51323d2f6e47435b26208"} Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.080556 4746 generic.go:334] "Generic (PLEG): container finished" podID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerID="8499be4a9306487bbb72dc8035732ee672ba052232114b3961fbf8f93045a599" exitCode=0 Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.080584 4746 generic.go:334] "Generic (PLEG): container finished" podID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerID="04c820c210f76dab3ef8f810caffc18b329e726662ff2507586342ee33f88def" exitCode=143 Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.080615 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerDied","Data":"8499be4a9306487bbb72dc8035732ee672ba052232114b3961fbf8f93045a599"} Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.080645 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerDied","Data":"04c820c210f76dab3ef8f810caffc18b329e726662ff2507586342ee33f88def"} Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.081496 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" event={"ID":"2aed8c8d-5213-480d-b2ab-4306e4e14e3b","Type":"ContainerStarted","Data":"68259df3c295064ee616925ba980a04f7dd6eda4d3fbc9a55a6554616637f6d6"} Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.805641 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.927787 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb44q\" (UniqueName: \"kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.927866 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.927915 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.928045 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.928075 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.928118 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.928160 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom\") pod \"6122b17a-dabe-4c54-8631-f377d5e4b576\" (UID: \"6122b17a-dabe-4c54-8631-f377d5e4b576\") " Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.928617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs" (OuterVolumeSpecName: "logs") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.933858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.933897 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q" (OuterVolumeSpecName: "kube-api-access-bb44q") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "kube-api-access-bb44q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.950739 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.966108 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.967028 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data" (OuterVolumeSpecName: "config-data") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:20 crc kubenswrapper[4746]: I0103 03:34:20.976925 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6122b17a-dabe-4c54-8631-f377d5e4b576" (UID: "6122b17a-dabe-4c54-8631-f377d5e4b576"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033703 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6122b17a-dabe-4c54-8631-f377d5e4b576-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033745 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033757 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033766 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033776 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb44q\" (UniqueName: \"kubernetes.io/projected/6122b17a-dabe-4c54-8631-f377d5e4b576-kube-api-access-bb44q\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033785 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.033793 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6122b17a-dabe-4c54-8631-f377d5e4b576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.089195 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" event={"ID":"6122b17a-dabe-4c54-8631-f377d5e4b576","Type":"ContainerDied","Data":"1675818d9cd67f2b82130b08e516fe282c500eab2ee0b37269bb8d3d83f59100"} Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.089245 4746 scope.go:117] "RemoveContainer" containerID="8499be4a9306487bbb72dc8035732ee672ba052232114b3961fbf8f93045a599" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.089351 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.092487 4746 generic.go:334] "Generic (PLEG): container finished" podID="2aed8c8d-5213-480d-b2ab-4306e4e14e3b" containerID="568c2550e150795b1ad319b218a514908e2ac8ff243666b375318bfbb1388104" exitCode=0 Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.092533 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" event={"ID":"2aed8c8d-5213-480d-b2ab-4306e4e14e3b","Type":"ContainerDied","Data":"568c2550e150795b1ad319b218a514908e2ac8ff243666b375318bfbb1388104"} Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.117647 4746 scope.go:117] "RemoveContainer" containerID="04c820c210f76dab3ef8f810caffc18b329e726662ff2507586342ee33f88def" Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.125597 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:21 crc kubenswrapper[4746]: I0103 03:34:21.130592 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-api-5444c5fcdb-ztr6v"] Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.436706 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.472610 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" path="/var/lib/kubelet/pods/6122b17a-dabe-4c54-8631-f377d5e4b576/volumes" Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.556594 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts\") pod \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.556761 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6zqq\" (UniqueName: \"kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq\") pod \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\" (UID: \"2aed8c8d-5213-480d-b2ab-4306e4e14e3b\") " Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.557367 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2aed8c8d-5213-480d-b2ab-4306e4e14e3b" (UID: "2aed8c8d-5213-480d-b2ab-4306e4e14e3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.560889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq" (OuterVolumeSpecName: "kube-api-access-b6zqq") pod "2aed8c8d-5213-480d-b2ab-4306e4e14e3b" (UID: "2aed8c8d-5213-480d-b2ab-4306e4e14e3b"). InnerVolumeSpecName "kube-api-access-b6zqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.658031 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:22 crc kubenswrapper[4746]: I0103 03:34:22.658057 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6zqq\" (UniqueName: \"kubernetes.io/projected/2aed8c8d-5213-480d-b2ab-4306e4e14e3b-kube-api-access-b6zqq\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.113674 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" event={"ID":"2aed8c8d-5213-480d-b2ab-4306e4e14e3b","Type":"ContainerDied","Data":"68259df3c295064ee616925ba980a04f7dd6eda4d3fbc9a55a6554616637f6d6"} Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.113714 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68259df3c295064ee616925ba980a04f7dd6eda4d3fbc9a55a6554616637f6d6" Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.113802 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b" Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.551872 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-s4ff8"] Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.566514 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-db-create-s4ff8"] Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.577291 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j"] Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.584275 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b"] Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.591062 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbicanf1fc-account-delete-pds4b"] Jan 03 03:34:23 crc kubenswrapper[4746]: I0103 03:34:23.599057 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-f1fc-account-create-update-m5b5j"] Jan 03 03:34:24 crc kubenswrapper[4746]: I0103 03:34:24.479783 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aed8c8d-5213-480d-b2ab-4306e4e14e3b" path="/var/lib/kubelet/pods/2aed8c8d-5213-480d-b2ab-4306e4e14e3b/volumes" Jan 03 03:34:24 crc kubenswrapper[4746]: I0103 03:34:24.481101 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384da4a3-d406-436c-9dac-bdde758d7784" path="/var/lib/kubelet/pods/384da4a3-d406-436c-9dac-bdde758d7784/volumes" Jan 03 03:34:24 crc kubenswrapper[4746]: I0103 03:34:24.482106 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b68811-d0b9-4fb6-8737-c2176b27e460" path="/var/lib/kubelet/pods/c8b68811-d0b9-4fb6-8737-c2176b27e460/volumes" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.170324 4746 generic.go:334] "Generic (PLEG): container finished" podID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerID="4f24c99ecebafecd1daa2fa676468242ce1697973c08c0761bb0bb7ca70c244a" exitCode=137 Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.170551 4746 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerDied","Data":"4f24c99ecebafecd1daa2fa676468242ce1697973c08c0761bb0bb7ca70c244a"} Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.174128 4746 generic.go:334] "Generic (PLEG): container finished" podID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerID="ffa86cf3a73df6b34f8d86eb97c27d43d6e75b6b7f3ea19a7af5a3b88afd6d9f" exitCode=137 Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.174165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerDied","Data":"ffa86cf3a73df6b34f8d86eb97c27d43d6e75b6b7f3ea19a7af5a3b88afd6d9f"} Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.174194 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" event={"ID":"3f26b869-52a1-48c3-9c08-ab841aa265ed","Type":"ContainerDied","Data":"4585c12a380098adebb5572273def231496956df4faced2dffab29b7e0860731"} Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.174204 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4585c12a380098adebb5572273def231496956df4faced2dffab29b7e0860731" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.202924 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.206025 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382158 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom\") pod \"68b5ec47-2fd9-4db7-97da-e216d788f047\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382273 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data\") pod \"3f26b869-52a1-48c3-9c08-ab841aa265ed\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom\") pod \"3f26b869-52a1-48c3-9c08-ab841aa265ed\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382365 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbm27\" (UniqueName: \"kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27\") pod \"68b5ec47-2fd9-4db7-97da-e216d788f047\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382414 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts8r2\" (UniqueName: \"kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2\") pod \"3f26b869-52a1-48c3-9c08-ab841aa265ed\" (UID: 
\"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382484 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data\") pod \"68b5ec47-2fd9-4db7-97da-e216d788f047\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382525 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs\") pod \"3f26b869-52a1-48c3-9c08-ab841aa265ed\" (UID: \"3f26b869-52a1-48c3-9c08-ab841aa265ed\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.382606 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs\") pod \"68b5ec47-2fd9-4db7-97da-e216d788f047\" (UID: \"68b5ec47-2fd9-4db7-97da-e216d788f047\") " Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.383944 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs" (OuterVolumeSpecName: "logs") pod "68b5ec47-2fd9-4db7-97da-e216d788f047" (UID: "68b5ec47-2fd9-4db7-97da-e216d788f047"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.384074 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs" (OuterVolumeSpecName: "logs") pod "3f26b869-52a1-48c3-9c08-ab841aa265ed" (UID: "3f26b869-52a1-48c3-9c08-ab841aa265ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.387734 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68b5ec47-2fd9-4db7-97da-e216d788f047" (UID: "68b5ec47-2fd9-4db7-97da-e216d788f047"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.387765 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2" (OuterVolumeSpecName: "kube-api-access-ts8r2") pod "3f26b869-52a1-48c3-9c08-ab841aa265ed" (UID: "3f26b869-52a1-48c3-9c08-ab841aa265ed"). InnerVolumeSpecName "kube-api-access-ts8r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.390820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f26b869-52a1-48c3-9c08-ab841aa265ed" (UID: "3f26b869-52a1-48c3-9c08-ab841aa265ed"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.391601 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27" (OuterVolumeSpecName: "kube-api-access-nbm27") pod "68b5ec47-2fd9-4db7-97da-e216d788f047" (UID: "68b5ec47-2fd9-4db7-97da-e216d788f047"). InnerVolumeSpecName "kube-api-access-nbm27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.413770 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data" (OuterVolumeSpecName: "config-data") pod "3f26b869-52a1-48c3-9c08-ab841aa265ed" (UID: "3f26b869-52a1-48c3-9c08-ab841aa265ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.418011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data" (OuterVolumeSpecName: "config-data") pod "68b5ec47-2fd9-4db7-97da-e216d788f047" (UID: "68b5ec47-2fd9-4db7-97da-e216d788f047"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484625 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484725 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f26b869-52a1-48c3-9c08-ab841aa265ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484756 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbm27\" (UniqueName: \"kubernetes.io/projected/68b5ec47-2fd9-4db7-97da-e216d788f047-kube-api-access-nbm27\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484782 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts8r2\" (UniqueName: \"kubernetes.io/projected/3f26b869-52a1-48c3-9c08-ab841aa265ed-kube-api-access-ts8r2\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484806 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484827 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f26b869-52a1-48c3-9c08-ab841aa265ed-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484849 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68b5ec47-2fd9-4db7-97da-e216d788f047-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:30 crc kubenswrapper[4746]: I0103 03:34:30.484873 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68b5ec47-2fd9-4db7-97da-e216d788f047-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 
03:34:31.182201 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" event={"ID":"68b5ec47-2fd9-4db7-97da-e216d788f047","Type":"ContainerDied","Data":"59b6e7a6d83a46d32926b27ffb7d093e52fedfcb72578e28ba5ac4b8b9221331"} Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.182229 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4" Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.182547 4746 scope.go:117] "RemoveContainer" containerID="4f24c99ecebafecd1daa2fa676468242ce1697973c08c0761bb0bb7ca70c244a" Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.182273 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c" Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.203435 4746 scope.go:117] "RemoveContainer" containerID="16ae517f1177bad321fbcf029c22bc10fdafcd5ddb3eb99beeb467fa802971ee" Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.204072 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.214140 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-56bf966488-wb74c"] Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.219849 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:34:31 crc kubenswrapper[4746]: I0103 03:34:31.225961 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-worker-6c6bcc9bcc-zq8s4"] Jan 03 03:34:32 crc kubenswrapper[4746]: I0103 03:34:32.473008 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" path="/var/lib/kubelet/pods/3f26b869-52a1-48c3-9c08-ab841aa265ed/volumes" Jan 03 03:34:32 crc kubenswrapper[4746]: I0103 03:34:32.474271 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" path="/var/lib/kubelet/pods/68b5ec47-2fd9-4db7-97da-e216d788f047/volumes" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.334216 4746 generic.go:334] "Generic (PLEG): container finished" podID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerID="4500195ac59180729b0eab2f18b54f9287036600fb7dc514444b6cf72c310476" exitCode=137 Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.334381 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerDied","Data":"4500195ac59180729b0eab2f18b54f9287036600fb7dc514444b6cf72c310476"} Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.336406 4746 generic.go:334] "Generic (PLEG): container finished" podID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerID="d605a9b27512dec4d2fd813f4205f6daf28f8e916c1870e230e71184f0f54d04" exitCode=137 Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.336444 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerDied","Data":"d605a9b27512dec4d2fd813f4205f6daf28f8e916c1870e230e71184f0f54d04"} Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.424809 
4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.429395 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.587306 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle\") pod \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.587868 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom\") pod \"facce79f-e3fb-4862-8e13-4d77b81d2205\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588008 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle\") pod \"facce79f-e3fb-4862-8e13-4d77b81d2205\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588099 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6hl\" (UniqueName: \"kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl\") pod \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588186 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom\") pod \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588288 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data\") pod \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588386 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs\") pod \"facce79f-e3fb-4862-8e13-4d77b81d2205\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588476 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data\") pod \"facce79f-e3fb-4862-8e13-4d77b81d2205\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588550 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx499\" (UniqueName: \"kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499\") pod \"facce79f-e3fb-4862-8e13-4d77b81d2205\" (UID: \"facce79f-e3fb-4862-8e13-4d77b81d2205\") " 
Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588669 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs\") pod \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\" (UID: \"ab76b5d0-1f31-473c-b225-5e7a79fd8416\") " Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.589261 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs" (OuterVolumeSpecName: "logs") pod "ab76b5d0-1f31-473c-b225-5e7a79fd8416" (UID: "ab76b5d0-1f31-473c-b225-5e7a79fd8416"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.588997 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs" (OuterVolumeSpecName: "logs") pod "facce79f-e3fb-4862-8e13-4d77b81d2205" (UID: "facce79f-e3fb-4862-8e13-4d77b81d2205"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.593536 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl" (OuterVolumeSpecName: "kube-api-access-ds6hl") pod "ab76b5d0-1f31-473c-b225-5e7a79fd8416" (UID: "ab76b5d0-1f31-473c-b225-5e7a79fd8416"). InnerVolumeSpecName "kube-api-access-ds6hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.597403 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499" (OuterVolumeSpecName: "kube-api-access-gx499") pod "facce79f-e3fb-4862-8e13-4d77b81d2205" (UID: "facce79f-e3fb-4862-8e13-4d77b81d2205"). InnerVolumeSpecName "kube-api-access-gx499". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.597608 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "facce79f-e3fb-4862-8e13-4d77b81d2205" (UID: "facce79f-e3fb-4862-8e13-4d77b81d2205"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.597921 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab76b5d0-1f31-473c-b225-5e7a79fd8416" (UID: "ab76b5d0-1f31-473c-b225-5e7a79fd8416"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.607186 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "facce79f-e3fb-4862-8e13-4d77b81d2205" (UID: "facce79f-e3fb-4862-8e13-4d77b81d2205"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.613559 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab76b5d0-1f31-473c-b225-5e7a79fd8416" (UID: "ab76b5d0-1f31-473c-b225-5e7a79fd8416"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.619624 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data" (OuterVolumeSpecName: "config-data") pod "ab76b5d0-1f31-473c-b225-5e7a79fd8416" (UID: "ab76b5d0-1f31-473c-b225-5e7a79fd8416"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.620927 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data" (OuterVolumeSpecName: "config-data") pod "facce79f-e3fb-4862-8e13-4d77b81d2205" (UID: "facce79f-e3fb-4862-8e13-4d77b81d2205"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690177 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690203 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6hl\" (UniqueName: \"kubernetes.io/projected/ab76b5d0-1f31-473c-b225-5e7a79fd8416-kube-api-access-ds6hl\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690241 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690261 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690270 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/facce79f-e3fb-4862-8e13-4d77b81d2205-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690279 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690289 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx499\" (UniqueName: \"kubernetes.io/projected/facce79f-e3fb-4862-8e13-4d77b81d2205-kube-api-access-gx499\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690298 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab76b5d0-1f31-473c-b225-5e7a79fd8416-logs\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690306 4746 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab76b5d0-1f31-473c-b225-5e7a79fd8416-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:49 crc kubenswrapper[4746]: I0103 03:34:49.690314 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/facce79f-e3fb-4862-8e13-4d77b81d2205-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.343792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" event={"ID":"ab76b5d0-1f31-473c-b225-5e7a79fd8416","Type":"ContainerDied","Data":"068296ad03bc1b276c719ac15fd84e2f96b27896089d54dc4a3fa157b768906f"} Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.343865 4746 scope.go:117] "RemoveContainer" containerID="4500195ac59180729b0eab2f18b54f9287036600fb7dc514444b6cf72c310476" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.343862 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-worker-649968f979-6ntm6" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.345887 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" event={"ID":"facce79f-e3fb-4862-8e13-4d77b81d2205","Type":"ContainerDied","Data":"93797496535e5e470ba9bacb854a84d96151949aabba7afbbf017c4c2808874c"} Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.345958 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.365621 4746 scope.go:117] "RemoveContainer" containerID="0dc541d246ab1f5eb9f70e229548c4e830fb637c2b22f8a53235d4c7c296c08b" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.384698 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.395703 4746 scope.go:117] "RemoveContainer" containerID="d605a9b27512dec4d2fd813f4205f6daf28f8e916c1870e230e71184f0f54d04" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.396611 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-worker-649968f979-6ntm6"] Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.404572 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.412848 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/barbican-keystone-listener-79f4d74dc4-2qwg7"] Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.415516 4746 scope.go:117] "RemoveContainer" containerID="5daa0209abd0ce8e89476e44f935cb47979dc05789c51323d2f6e47435b26208" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.474065 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" path="/var/lib/kubelet/pods/ab76b5d0-1f31-473c-b225-5e7a79fd8416/volumes" Jan 03 03:34:50 crc kubenswrapper[4746]: I0103 03:34:50.474704 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" path="/var/lib/kubelet/pods/facce79f-e3fb-4862-8e13-4d77b81d2205/volumes" Jan 03 03:34:57 crc 
kubenswrapper[4746]: I0103 03:34:57.704032 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-bootstrap-brhg2"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.716724 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone-bootstrap-brhg2"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.723497 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.723779 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" podUID="0f024828-5253-4134-bd22-720212206aa3" containerName="keystone-api" containerID="cri-o://3203a1e3735d8352f746e109713c8ef7f8f9177307cb32b3d33fe0243ca882a8" gracePeriod=30 Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.728032 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-db-sync-cnt9k"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.734302 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone-db-sync-cnt9k"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.771907 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772263 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772278 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772297 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772306 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772322 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772332 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api-log" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772346 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772355 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772374 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772383 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772392 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772402 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772421 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772431 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772443 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aed8c8d-5213-480d-b2ab-4306e4e14e3b" containerName="mariadb-account-delete" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772452 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aed8c8d-5213-480d-b2ab-4306e4e14e3b" containerName="mariadb-account-delete" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772468 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772477 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772492 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772503 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: E0103 03:34:57.772519 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772528 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772691 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772706 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aed8c8d-5213-480d-b2ab-4306e4e14e3b" containerName="mariadb-account-delete" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772716 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772726 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772743 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6122b17a-dabe-4c54-8631-f377d5e4b576" containerName="barbican-api" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772758 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" 
containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772770 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="facce79f-e3fb-4862-8e13-4d77b81d2205" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772780 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab76b5d0-1f31-473c-b225-5e7a79fd8416" containerName="barbican-worker-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772788 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener-log" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772800 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f26b869-52a1-48c3-9c08-ab841aa265ed" containerName="barbican-worker" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.772813 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b5ec47-2fd9-4db7-97da-e216d788f047" containerName="barbican-keystone-listener" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.773417 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.777826 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.808981 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.809028 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86cx9\" (UniqueName: \"kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.910096 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.910337 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86cx9\" (UniqueName: \"kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.911019 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " 
pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:57 crc kubenswrapper[4746]: I0103 03:34:57.934008 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86cx9\" (UniqueName: \"kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9\") pod \"keystone70a3-account-delete-6vtl2\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.090018 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.476817 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="682013af-1eb1-44f3-b23a-b63a262a94ba" path="/var/lib/kubelet/pods/682013af-1eb1-44f3-b23a-b63a262a94ba/volumes" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.477754 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c34c1d-632f-406a-a5ae-1ce804ef6f66" path="/var/lib/kubelet/pods/c8c34c1d-632f-406a-a5ae-1ce804ef6f66/volumes" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.484331 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.550339 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-q7xkl"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.559750 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-q7xkl"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.571770 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["barbican-kuttl-tests/root-account-create-update-sg72m"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.572650 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.576738 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-sg72m"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.618974 4746 reflector.go:368] Caches populated for *v1.Secret from object-"barbican-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.623113 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6x9\" (UniqueName: \"kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.623211 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.664811 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.668323 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.675510 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.686934 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-sg72m"] Jan 03 03:34:58 crc kubenswrapper[4746]: E0103 03:34:58.698029 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-qv6x9 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="barbican-kuttl-tests/root-account-create-update-sg72m" podUID="6b65754e-dbed-4da4-87eb-424d7473d5c3" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.724604 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6x9\" (UniqueName: \"kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.725459 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:58 crc kubenswrapper[4746]: E0103 03:34:58.725575 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:34:58 crc kubenswrapper[4746]: E0103 03:34:58.725638 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:34:59.225620674 +0000 UTC m=+1219.075510979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : configmap "openstack-scripts" not found Jan 03 03:34:58 crc kubenswrapper[4746]: E0103 03:34:58.729499 4746 projected.go:194] Error preparing data for projected volume kube-api-access-qv6x9 for pod barbican-kuttl-tests/root-account-create-update-sg72m: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:34:58 crc kubenswrapper[4746]: E0103 03:34:58.729562 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9 podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:34:59.22954691 +0000 UTC m=+1219.079437215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qv6x9" (UniqueName: "kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:34:58 crc kubenswrapper[4746]: I0103 03:34:58.824203 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/openstack-galera-2" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="galera" containerID="cri-o://825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88" gracePeriod=30 Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.233762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.233995 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6x9\" (UniqueName: \"kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.234139 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.234281 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:00.234261324 +0000 UTC m=+1220.084151629 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.237191 4746 projected.go:194] Error preparing data for projected volume kube-api-access-qv6x9 for pod barbican-kuttl-tests/root-account-create-update-sg72m: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.237258 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9 podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:00.237244607 +0000 UTC m=+1220.087134912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qv6x9" (UniqueName: "kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.264289 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.264732 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/memcached-0" podUID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" containerName="memcached" containerID="cri-o://3625c9ce57252a3d32952a5b650a167d0c04ed4aa1946b9032825c6e9970035c" gracePeriod=30 Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.419230 4746 generic.go:334] "Generic (PLEG): container finished" podID="67518beb-2963-4548-9d9c-967483f41b00" containerID="de0dea9548acb030391b371f168ebc62ede49e5fb5468e3d2d1e1d303aada1ee" exitCode=1 Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.419311 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.420242 4746 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" secret="" err="secret \"galera-openstack-dockercfg-82mft\" not found" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.420278 4746 scope.go:117] "RemoveContainer" containerID="de0dea9548acb030391b371f168ebc62ede49e5fb5468e3d2d1e1d303aada1ee" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.420813 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" event={"ID":"67518beb-2963-4548-9d9c-967483f41b00","Type":"ContainerDied","Data":"de0dea9548acb030391b371f168ebc62ede49e5fb5468e3d2d1e1d303aada1ee"} Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.420854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" event={"ID":"67518beb-2963-4548-9d9c-967483f41b00","Type":"ContainerStarted","Data":"658f49874f2e6bddba7e61eb09f0b7c88fda0c5a6685a1d6dd681b1e6d95ce5b"} Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.437223 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.437321 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts podName:67518beb-2963-4548-9d9c-967483f41b00 nodeName:}" failed. No retries permitted until 2026-01-03 03:34:59.937299657 +0000 UTC m=+1219.787189962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts") pod "keystone70a3-account-delete-6vtl2" (UID: "67518beb-2963-4548-9d9c-967483f41b00") : configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.525496 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.674054 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.737626 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.801266 4746 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.66:56060->38.102.83.66:38361: write tcp 38.102.83.66:56060->38.102.83.66:38361: write: broken pipe Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.840878 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.840922 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.840955 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.840974 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.840999 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59srz\" (UniqueName: \"kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.841072 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default\") pod \"0a58aeed-241f-4361-8570-043366a4a146\" (UID: \"0a58aeed-241f-4361-8570-043366a4a146\") " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.841285 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.841526 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0a58aeed-241f-4361-8570-043366a4a146-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.841676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.841722 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.842060 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.845358 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz" (OuterVolumeSpecName: "kube-api-access-59srz") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "kube-api-access-59srz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.849376 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "0a58aeed-241f-4361-8570-043366a4a146" (UID: "0a58aeed-241f-4361-8570-043366a4a146"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.943204 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.943291 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.943308 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.943321 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59srz\" (UniqueName: \"kubernetes.io/projected/0a58aeed-241f-4361-8570-043366a4a146-kube-api-access-59srz\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.943337 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0a58aeed-241f-4361-8570-043366a4a146-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.943313 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: E0103 03:34:59.943407 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts podName:67518beb-2963-4548-9d9c-967483f41b00 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:00.943388435 +0000 UTC m=+1220.793278740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts") pod "keystone70a3-account-delete-6vtl2" (UID: "67518beb-2963-4548-9d9c-967483f41b00") : configmap "openstack-scripts" not found Jan 03 03:34:59 crc kubenswrapper[4746]: I0103 03:34:59.955364 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.044455 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.107419 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.253003 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv6x9\" (UniqueName: \"kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.253295 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts\") pod \"root-account-create-update-sg72m\" (UID: \"6b65754e-dbed-4da4-87eb-424d7473d5c3\") " pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.253737 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.253817 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:02.253799499 +0000 UTC m=+1222.103689804 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : configmap "openstack-scripts" not found Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.256758 4746 projected.go:194] Error preparing data for projected volume kube-api-access-qv6x9 for pod barbican-kuttl-tests/root-account-create-update-sg72m: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.256849 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9 podName:6b65754e-dbed-4da4-87eb-424d7473d5c3 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:02.256828923 +0000 UTC m=+1222.106719228 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qv6x9" (UniqueName: "kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9") pod "root-account-create-update-sg72m" (UID: "6b65754e-dbed-4da4-87eb-424d7473d5c3") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.428595 4746 generic.go:334] "Generic (PLEG): container finished" podID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" containerID="3625c9ce57252a3d32952a5b650a167d0c04ed4aa1946b9032825c6e9970035c" exitCode=0 Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.428700 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/memcached-0" event={"ID":"829fb3d2-d144-42b6-9e2c-493ae34fdf6a","Type":"ContainerDied","Data":"3625c9ce57252a3d32952a5b650a167d0c04ed4aa1946b9032825c6e9970035c"} Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.430455 4746 generic.go:334] "Generic (PLEG): container finished" podID="67518beb-2963-4548-9d9c-967483f41b00" containerID="6581387bd8ec073f61b21a891f4628e331990a50846be0a6d231587bf2bd8696" exitCode=1 Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.430534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" event={"ID":"67518beb-2963-4548-9d9c-967483f41b00","Type":"ContainerDied","Data":"6581387bd8ec073f61b21a891f4628e331990a50846be0a6d231587bf2bd8696"} Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.430598 4746 scope.go:117] "RemoveContainer" containerID="de0dea9548acb030391b371f168ebc62ede49e5fb5468e3d2d1e1d303aada1ee" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.431503 4746 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" secret="" err="secret \"galera-openstack-dockercfg-82mft\" not found" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.431584 4746 scope.go:117] "RemoveContainer" containerID="6581387bd8ec073f61b21a891f4628e331990a50846be0a6d231587bf2bd8696" Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.431930 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone70a3-account-delete-6vtl2_barbican-kuttl-tests(67518beb-2963-4548-9d9c-967483f41b00)\"" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" podUID="67518beb-2963-4548-9d9c-967483f41b00" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.432771 4746 generic.go:334] "Generic (PLEG): container finished" podID="0a58aeed-241f-4361-8570-043366a4a146" containerID="825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88" exitCode=0 Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.433147 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-2" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.441845 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/root-account-create-update-sg72m" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.444161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerDied","Data":"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88"} Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.444297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-2" event={"ID":"0a58aeed-241f-4361-8570-043366a4a146","Type":"ContainerDied","Data":"0bdb065a9117b3d443635acc904f0b743c9935bb779a30726611e66de92db38f"} Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.462517 4746 scope.go:117] "RemoveContainer" containerID="825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.488858 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9" path="/var/lib/kubelet/pods/7e84b37b-8dc8-4a4c-bb3d-2708cf7d56e9/volumes" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.489365 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.489387 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/openstack-galera-2"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.503500 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-sg72m"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.507444 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/root-account-create-update-sg72m"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.525456 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/rabbitmq-server-0" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="rabbitmq" containerID="cri-o://fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30" gracePeriod=604800 Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.525914 4746 scope.go:117] "RemoveContainer" containerID="5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.552422 4746 scope.go:117] "RemoveContainer" containerID="825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88" Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.552899 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88\": container with ID starting with 825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88 not found: ID does not exist" containerID="825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.552947 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88"} err="failed to get container status \"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88\": rpc error: code = NotFound desc = could not find container \"825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88\": container with ID starting with 
825daa2bdeaa611d2191b1efd8b2850b8d1ef67b2b6a42daa286ffcb85787c88 not found: ID does not exist" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.552975 4746 scope.go:117] "RemoveContainer" containerID="5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372" Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.553272 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372\": container with ID starting with 5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372 not found: ID does not exist" containerID="5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.553316 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372"} err="failed to get container status \"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372\": rpc error: code = NotFound desc = could not find container \"5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372\": container with ID starting with 5777d2f68feb02bb9241dc831551ce9aec835f696c08c427127f44dde71c2372 not found: ID does not exist" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.558762 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b65754e-dbed-4da4-87eb-424d7473d5c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.558787 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv6x9\" (UniqueName: \"kubernetes.io/projected/6b65754e-dbed-4da4-87eb-424d7473d5c3-kube-api-access-qv6x9\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.660424 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.760113 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config\") pod \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.760170 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66x2k\" (UniqueName: \"kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k\") pod \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.760197 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data\") pod \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\" (UID: \"829fb3d2-d144-42b6-9e2c-493ae34fdf6a\") " Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.760744 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "829fb3d2-d144-42b6-9e2c-493ae34fdf6a" (UID: "829fb3d2-d144-42b6-9e2c-493ae34fdf6a"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.760801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data" (OuterVolumeSpecName: "config-data") pod "829fb3d2-d144-42b6-9e2c-493ae34fdf6a" (UID: "829fb3d2-d144-42b6-9e2c-493ae34fdf6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.769846 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k" (OuterVolumeSpecName: "kube-api-access-66x2k") pod "829fb3d2-d144-42b6-9e2c-493ae34fdf6a" (UID: "829fb3d2-d144-42b6-9e2c-493ae34fdf6a"). InnerVolumeSpecName "kube-api-access-66x2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.861417 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.861452 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66x2k\" (UniqueName: \"kubernetes.io/projected/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-kube-api-access-66x2k\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.861462 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/829fb3d2-d144-42b6-9e2c-493ae34fdf6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.913498 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:35:00 crc kubenswrapper[4746]: I0103 03:35:00.913780 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" podUID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" containerName="manager" containerID="cri-o://8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5" gracePeriod=10 Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.968725 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:35:00 crc kubenswrapper[4746]: E0103 03:35:00.968788 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts podName:67518beb-2963-4548-9d9c-967483f41b00 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:02.968774192 +0000 UTC m=+1222.818664497 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts") pod "keystone70a3-account-delete-6vtl2" (UID: "67518beb-2963-4548-9d9c-967483f41b00") : configmap "openstack-scripts" not found Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.013385 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/openstack-galera-1" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="galera" containerID="cri-o://0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318" gracePeriod=28 Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.158429 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.159684 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-wdcpb" podUID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" containerName="registry-server" containerID="cri-o://10b33a74d0610f3bfff929006acf8d039fd15cd2d04047cc256b69d7447b0975" gracePeriod=30 Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.217857 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk"] Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.226939 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/227209b9b6951233cad6654f32e6cb7fe537ee5336df7f10129e790fab75qdk"] Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.363087 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.454209 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.455215 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f024828-5253-4134-bd22-720212206aa3" containerID="3203a1e3735d8352f746e109713c8ef7f8f9177307cb32b3d33fe0243ca882a8" exitCode=0 Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.455290 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" event={"ID":"0f024828-5253-4134-bd22-720212206aa3","Type":"ContainerDied","Data":"3203a1e3735d8352f746e109713c8ef7f8f9177307cb32b3d33fe0243ca882a8"} Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.455337 4746 scope.go:117] "RemoveContainer" containerID="3203a1e3735d8352f746e109713c8ef7f8f9177307cb32b3d33fe0243ca882a8" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.461068 4746 generic.go:334] "Generic (PLEG): container finished" podID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" containerID="10b33a74d0610f3bfff929006acf8d039fd15cd2d04047cc256b69d7447b0975" exitCode=0 Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.461119 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-wdcpb" event={"ID":"35d592dd-baad-44d9-9fc0-3eab11cea0b4","Type":"ContainerDied","Data":"10b33a74d0610f3bfff929006acf8d039fd15cd2d04047cc256b69d7447b0975"} Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.463118 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/memcached-0" event={"ID":"829fb3d2-d144-42b6-9e2c-493ae34fdf6a","Type":"ContainerDied","Data":"1333758e74778e8c69f43f13ed72dae4f77dc4ed7dd4e6707a0238f7ff3d3d22"} Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.463144 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/memcached-0" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.465602 4746 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" secret="" err="secret \"galera-openstack-dockercfg-82mft\" not found" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.465675 4746 scope.go:117] "RemoveContainer" containerID="6581387bd8ec073f61b21a891f4628e331990a50846be0a6d231587bf2bd8696" Jan 03 03:35:01 crc kubenswrapper[4746]: E0103 03:35:01.465800 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-delete\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-delete pod=keystone70a3-account-delete-6vtl2_barbican-kuttl-tests(67518beb-2963-4548-9d9c-967483f41b00)\"" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" podUID="67518beb-2963-4548-9d9c-967483f41b00" Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.478325 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert\") pod \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.478396 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert\") pod \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " Jan 03 03:35:01 crc kubenswrapper[4746]: I0103 03:35:01.478453 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5g95\" (UniqueName: \"kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95\") pod \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\" (UID: \"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.488020 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" (UID: "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.497471 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95" (OuterVolumeSpecName: "kube-api-access-l5g95") pod "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" (UID: "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87"). InnerVolumeSpecName "kube-api-access-l5g95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.497565 4746 scope.go:117] "RemoveContainer" containerID="3625c9ce57252a3d32952a5b650a167d0c04ed4aa1946b9032825c6e9970035c" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.501250 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" (UID: "f1fcbd3b-57ff-4989-b0ef-19fe9df21d87"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.501819 4746 generic.go:334] "Generic (PLEG): container finished" podID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" containerID="8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5" exitCode=0 Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.501857 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" event={"ID":"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87","Type":"ContainerDied","Data":"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.501880 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" event={"ID":"f1fcbd3b-57ff-4989-b0ef-19fe9df21d87","Type":"ContainerDied","Data":"1b8965c9a0c1a3b071c4a0b98c61e6281fe9a994b669cc785ee4c0f4eb8d7e3b"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.501931 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.579536 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt7tv\" (UniqueName: \"kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv\") pod \"0f024828-5253-4134-bd22-720212206aa3\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.579595 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys\") pod \"0f024828-5253-4134-bd22-720212206aa3\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.579624 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts\") pod \"0f024828-5253-4134-bd22-720212206aa3\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.579703 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data\") pod \"0f024828-5253-4134-bd22-720212206aa3\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.579757 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys\") pod \"0f024828-5253-4134-bd22-720212206aa3\" (UID: \"0f024828-5253-4134-bd22-720212206aa3\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.580036 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5g95\" (UniqueName: \"kubernetes.io/projected/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-kube-api-access-l5g95\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.580047 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc 
kubenswrapper[4746]: I0103 03:35:01.580056 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.586639 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0f024828-5253-4134-bd22-720212206aa3" (UID: "0f024828-5253-4134-bd22-720212206aa3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.586817 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts" (OuterVolumeSpecName: "scripts") pod "0f024828-5253-4134-bd22-720212206aa3" (UID: "0f024828-5253-4134-bd22-720212206aa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.587608 4746 scope.go:117] "RemoveContainer" containerID="8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.590643 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv" (OuterVolumeSpecName: "kube-api-access-bt7tv") pod "0f024828-5253-4134-bd22-720212206aa3" (UID: "0f024828-5253-4134-bd22-720212206aa3"). InnerVolumeSpecName "kube-api-access-bt7tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.593456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0f024828-5253-4134-bd22-720212206aa3" (UID: "0f024828-5253-4134-bd22-720212206aa3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.615394 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data" (OuterVolumeSpecName: "config-data") pod "0f024828-5253-4134-bd22-720212206aa3" (UID: "0f024828-5253-4134-bd22-720212206aa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.617983 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.621703 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.626239 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/memcached-0"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.630691 4746 scope.go:117] "RemoveContainer" containerID="8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5" Jan 03 03:35:02 crc kubenswrapper[4746]: E0103 03:35:01.631165 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5\": container with ID starting with 8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5 not found: ID does not exist" containerID="8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.631208 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5"} err="failed to get container status \"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5\": rpc error: code = NotFound desc = could not find container \"8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5\": container with ID starting with 8521f73c00acd2a5294e1d24db0c0d379e5ad978ac5524dab54c68b6c1bf53b5 not found: ID does not exist" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.632138 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.648384 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5fc9c6ccf4-xgzjx"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.681899 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-config-data\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.681922 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.681933 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt7tv\" (UniqueName: \"kubernetes.io/projected/0f024828-5253-4134-bd22-720212206aa3-kube-api-access-bt7tv\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.681941 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.681950 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f024828-5253-4134-bd22-720212206aa3-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.782825 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hzr8\" (UniqueName: \"kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8\") pod \"35d592dd-baad-44d9-9fc0-3eab11cea0b4\" (UID: 
\"35d592dd-baad-44d9-9fc0-3eab11cea0b4\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.787690 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8" (OuterVolumeSpecName: "kube-api-access-7hzr8") pod "35d592dd-baad-44d9-9fc0-3eab11cea0b4" (UID: "35d592dd-baad-44d9-9fc0-3eab11cea0b4"). InnerVolumeSpecName "kube-api-access-7hzr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.798095 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="barbican-kuttl-tests/openstack-galera-1" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="galera" probeResult="failure" output="command timed out" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:01.884799 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hzr8\" (UniqueName: \"kubernetes.io/projected/35d592dd-baad-44d9-9fc0-3eab11cea0b4-kube-api-access-7hzr8\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.360414 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.473172 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a58aeed-241f-4361-8570-043366a4a146" path="/var/lib/kubelet/pods/0a58aeed-241f-4361-8570-043366a4a146/volumes" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.474059 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b65754e-dbed-4da4-87eb-424d7473d5c3" path="/var/lib/kubelet/pods/6b65754e-dbed-4da4-87eb-424d7473d5c3/volumes" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.474340 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" path="/var/lib/kubelet/pods/829fb3d2-d144-42b6-9e2c-493ae34fdf6a/volumes" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.474830 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a980463d-8e17-4fca-bdee-d83282ad9d37" path="/var/lib/kubelet/pods/a980463d-8e17-4fca-bdee-d83282ad9d37/volumes" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.476135 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" path="/var/lib/kubelet/pods/f1fcbd3b-57ff-4989-b0ef-19fe9df21d87/volumes" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495362 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495420 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495464 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" 
(UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495545 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495631 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p99gz\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495717 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495748 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495925 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") pod \"2615393c-ec92-4378-9eb7-4a5043a44bb6\" (UID: \"2615393c-ec92-4378-9eb7-4a5043a44bb6\") " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.495951 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.496270 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.496625 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.496761 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.498861 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz" (OuterVolumeSpecName: "kube-api-access-p99gz") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "kube-api-access-p99gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.498910 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.500495 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info" (OuterVolumeSpecName: "pod-info") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.511491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294" (OuterVolumeSpecName: "persistence") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.514048 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.516506 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-wdcpb" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.519506 4746 generic.go:334] "Generic (PLEG): container finished" podID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerID="fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30" exitCode=0 Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.519593 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/rabbitmq-server-0" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.571320 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2615393c-ec92-4378-9eb7-4a5043a44bb6" (UID: "2615393c-ec92-4378-9eb7-4a5043a44bb6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.586979 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone-5794645689-k95gh" event={"ID":"0f024828-5253-4134-bd22-720212206aa3","Type":"ContainerDied","Data":"1093bb8cc5303bbd7fcfbbc1e4899cd63f70770833bd7e10fbfaafa1dcfa12a8"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.587026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-wdcpb" event={"ID":"35d592dd-baad-44d9-9fc0-3eab11cea0b4","Type":"ContainerDied","Data":"0cf5c529e6385ddafee830813a011b9a6cc4f3d8428f16bc02dc43c79d3f82b6"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.587048 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerDied","Data":"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.587064 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/rabbitmq-server-0" event={"ID":"2615393c-ec92-4378-9eb7-4a5043a44bb6","Type":"ContainerDied","Data":"0ec70d0d0f544f8d31fe675a33ce9ded8e4c171cb88314292f0a835c7429b5dd"} Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.587085 4746 scope.go:117] "RemoveContainer" containerID="10b33a74d0610f3bfff929006acf8d039fd15cd2d04047cc256b69d7447b0975" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597466 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597503 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p99gz\" (UniqueName: \"kubernetes.io/projected/2615393c-ec92-4378-9eb7-4a5043a44bb6-kube-api-access-p99gz\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597517 4746 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2615393c-ec92-4378-9eb7-4a5043a44bb6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597527 4746 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2615393c-ec92-4378-9eb7-4a5043a44bb6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597552 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") on node \"crc\" " Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597565 4746 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2615393c-ec92-4378-9eb7-4a5043a44bb6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.597578 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2615393c-ec92-4378-9eb7-4a5043a44bb6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.616069 4746 csi_attacher.go:630] 
kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.616221 4746 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294") on node "crc" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.616414 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.618678 4746 scope.go:117] "RemoveContainer" containerID="fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.627329 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone-5794645689-k95gh"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.632971 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.637620 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-wdcpb"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.674673 4746 scope.go:117] "RemoveContainer" containerID="fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.691149 4746 scope.go:117] "RemoveContainer" containerID="fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30" Jan 03 03:35:02 crc kubenswrapper[4746]: E0103 03:35:02.692781 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30\": container with ID starting with fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30 not found: ID does not exist" containerID="fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.692823 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30"} err="failed to get container status \"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30\": rpc error: code = NotFound desc = could not find container \"fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30\": container with ID starting with fcf3b90cc3fca54b3aad2a44fc5859d50721e7dbbf4a2eb796126fe799d7db30 not found: ID does not exist" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.692850 4746 scope.go:117] "RemoveContainer" containerID="fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e" Jan 03 03:35:02 crc kubenswrapper[4746]: E0103 03:35:02.693373 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e\": container with ID starting with fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e not found: ID does not exist" containerID="fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.693397 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e"} err="failed to get container status \"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e\": rpc error: code = NotFound desc = could not find container \"fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e\": container with ID starting with fb729afc05393548e7b6f79269ac40a0b64ddd42961f76f7f37e8b165d5b442e not found: ID does not exist" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.698613 4746 reconciler_common.go:293] "Volume detached for volume \"pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab3d88c0-b45c-4664-a5f2-3821da5bd294\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.792632 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-db-create-shs5n"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.797493 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone-db-create-shs5n"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.801736 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.807913 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.813143 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone-70a3-account-create-update-49wmz"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.814436 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.862008 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.867722 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/rabbitmq-server-0"] Jan 03 03:35:02 crc kubenswrapper[4746]: I0103 03:35:02.968465 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="barbican-kuttl-tests/openstack-galera-0" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="galera" containerID="cri-o://b9dfb9132a8c309c717cc27f788ee418a82a6964fbb89d7f66834817d1bfa2a8" gracePeriod=26 Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.003300 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.003365 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.003439 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz67l\" (UniqueName: \"kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.003467 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.003537 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004238 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004265 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004316 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated\") pod \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\" (UID: \"3b854ff9-97ba-4cd2-9136-db9e311d5e94\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004356 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004641 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004673 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004683 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.004692 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3b854ff9-97ba-4cd2-9136-db9e311d5e94-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: E0103 03:35:03.004742 4746 configmap.go:193] Couldn't get configMap barbican-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 03 03:35:03 crc kubenswrapper[4746]: E0103 03:35:03.004779 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts podName:67518beb-2963-4548-9d9c-967483f41b00 nodeName:}" failed. No retries permitted until 2026-01-03 03:35:07.004765406 +0000 UTC m=+1226.854655711 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts") pod "keystone70a3-account-delete-6vtl2" (UID: "67518beb-2963-4548-9d9c-967483f41b00") : configmap "openstack-scripts" not found Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.006833 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l" (OuterVolumeSpecName: "kube-api-access-xz67l") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "kube-api-access-xz67l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.014742 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "3b854ff9-97ba-4cd2-9136-db9e311d5e94" (UID: "3b854ff9-97ba-4cd2-9136-db9e311d5e94"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.063217 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.105952 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz67l\" (UniqueName: \"kubernetes.io/projected/3b854ff9-97ba-4cd2-9136-db9e311d5e94-kube-api-access-xz67l\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.105995 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.116682 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.207247 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86cx9\" (UniqueName: \"kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9\") pod \"67518beb-2963-4548-9d9c-967483f41b00\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.207318 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts\") pod \"67518beb-2963-4548-9d9c-967483f41b00\" (UID: \"67518beb-2963-4548-9d9c-967483f41b00\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.207714 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.207880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67518beb-2963-4548-9d9c-967483f41b00" (UID: "67518beb-2963-4548-9d9c-967483f41b00"). InnerVolumeSpecName "operator-scripts". 
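Annotation (not part of the journal): the pair of errors above ("Couldn't get configMap barbican-kuttl-tests/openstack-scripts" followed by "No retries permitted until ... (durationBeforeRetry 4s)") shows the volume manager backing off on a MountVolume.SetUp that cannot complete because the ConfigMap has already been deleted; the 4s delay is consistent with a doubling backoff (0.5s, 1s, 2s, 4s, ...). The sketch below reproduces that retry pattern with the generic apimachinery wait helpers rather than kubelet's internal backoff type; mountOperatorScripts is a stand-in for the real mount step.

package main

import (
    "errors"
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

var errNotFound = errors.New(`configmap "openstack-scripts" not found`)

// mountOperatorScripts is a placeholder for the failing MountVolume.SetUp work.
func mountOperatorScripts() error {
    return errNotFound
}

func main() {
    backoff := wait.Backoff{
        Duration: 500 * time.Millisecond, // first retry delay
        Factor:   2,                      // 0.5s, 1s, 2s, 4s, ... as in the log
        Steps:    5,
    }
    err := wait.ExponentialBackoff(backoff, func() (bool, error) {
        if mountErr := mountOperatorScripts(); mountErr != nil {
            fmt.Println("retrying:", mountErr)
            return false, nil // not done yet, keep backing off
        }
        return true, nil
    })
    fmt.Println("final result:", err)
}

In this trace the retry never needs to succeed: the pod that wanted the volume (keystone70a3-account-delete-6vtl2) is itself deleted a moment later, as the following entries show.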
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.210638 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9" (OuterVolumeSpecName: "kube-api-access-86cx9") pod "67518beb-2963-4548-9d9c-967483f41b00" (UID: "67518beb-2963-4548-9d9c-967483f41b00"). InnerVolumeSpecName "kube-api-access-86cx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.308585 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67518beb-2963-4548-9d9c-967483f41b00-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.308898 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86cx9\" (UniqueName: \"kubernetes.io/projected/67518beb-2963-4548-9d9c-967483f41b00-kube-api-access-86cx9\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.531232 4746 generic.go:334] "Generic (PLEG): container finished" podID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerID="0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318" exitCode=0 Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.531282 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerDied","Data":"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318"} Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.531306 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-1" event={"ID":"3b854ff9-97ba-4cd2-9136-db9e311d5e94","Type":"ContainerDied","Data":"5c7f8be799cb70f1a83480b6b4716e6d4ed652f10717dd2f361f688ed024d166"} Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.531323 4746 scope.go:117] "RemoveContainer" containerID="0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.531463 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-1" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.538985 4746 generic.go:334] "Generic (PLEG): container finished" podID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerID="b9dfb9132a8c309c717cc27f788ee418a82a6964fbb89d7f66834817d1bfa2a8" exitCode=0 Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.539024 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerDied","Data":"b9dfb9132a8c309c717cc27f788ee418a82a6964fbb89d7f66834817d1bfa2a8"} Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.551330 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" event={"ID":"67518beb-2963-4548-9d9c-967483f41b00","Type":"ContainerDied","Data":"658f49874f2e6bddba7e61eb09f0b7c88fda0c5a6685a1d6dd681b1e6d95ce5b"} Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.551496 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="barbican-kuttl-tests/keystone70a3-account-delete-6vtl2" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.566867 4746 scope.go:117] "RemoveContainer" containerID="44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.567937 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.575185 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/openstack-galera-1"] Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.588733 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.592305 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/keystone70a3-account-delete-6vtl2"] Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.604465 4746 scope.go:117] "RemoveContainer" containerID="0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318" Jan 03 03:35:03 crc kubenswrapper[4746]: E0103 03:35:03.605059 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318\": container with ID starting with 0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318 not found: ID does not exist" containerID="0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.605119 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318"} err="failed to get container status \"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318\": rpc error: code = NotFound desc = could not find container \"0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318\": container with ID starting with 0bee6578c5a97a510ec4c78ae9ac37856cc10018e786f3a3110b29464bfee318 not found: ID does not exist" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.605183 4746 scope.go:117] "RemoveContainer" containerID="44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69" Jan 03 03:35:03 crc kubenswrapper[4746]: E0103 03:35:03.611158 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69\": container with ID starting with 44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69 not found: ID does not exist" containerID="44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.611191 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69"} err="failed to get container status \"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69\": rpc error: code = NotFound desc = could not find container \"44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69\": container with ID starting with 44137e4d944c2589e25d61eaffc5f039cfa5f51d7812a9a7b7370b5da6a35a69 not found: ID does not exist" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.611212 4746 scope.go:117] "RemoveContainer" 
containerID="6581387bd8ec073f61b21a891f4628e331990a50846be0a6d231587bf2bd8696" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.661442 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733245 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733301 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733350 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733376 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733412 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8crkr\" (UniqueName: \"kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.733427 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts\") pod \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\" (UID: \"c201de48-9fda-488a-9ca1-d6cb8cc085c5\") " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.734548 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.736326 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.737695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.737987 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.739087 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr" (OuterVolumeSpecName: "kube-api-access-8crkr") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "kube-api-access-8crkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.752221 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "c201de48-9fda-488a-9ca1-d6cb8cc085c5" (UID: "c201de48-9fda-488a-9ca1-d6cb8cc085c5"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834629 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8crkr\" (UniqueName: \"kubernetes.io/projected/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kube-api-access-8crkr\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834675 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834707 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834719 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834729 4746 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.834738 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c201de48-9fda-488a-9ca1-d6cb8cc085c5-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.846325 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 03 03:35:03 crc kubenswrapper[4746]: I0103 03:35:03.935725 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.473240 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f024828-5253-4134-bd22-720212206aa3" path="/var/lib/kubelet/pods/0f024828-5253-4134-bd22-720212206aa3/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.475278 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" path="/var/lib/kubelet/pods/2615393c-ec92-4378-9eb7-4a5043a44bb6/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.476154 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" path="/var/lib/kubelet/pods/35d592dd-baad-44d9-9fc0-3eab11cea0b4/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.477465 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3603205b-18b7-4254-860a-949ffb13bda2" path="/var/lib/kubelet/pods/3603205b-18b7-4254-860a-949ffb13bda2/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.478638 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" path="/var/lib/kubelet/pods/3b854ff9-97ba-4cd2-9136-db9e311d5e94/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.479438 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="67518beb-2963-4548-9d9c-967483f41b00" path="/var/lib/kubelet/pods/67518beb-2963-4548-9d9c-967483f41b00/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.480922 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99f5dd0-17b8-45f8-97f7-6571b6c35ce1" path="/var/lib/kubelet/pods/f99f5dd0-17b8-45f8-97f7-6571b6c35ce1/volumes" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.560371 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="barbican-kuttl-tests/openstack-galera-0" event={"ID":"c201de48-9fda-488a-9ca1-d6cb8cc085c5","Type":"ContainerDied","Data":"9cb2ab6664d7da24d103b44fc00c4b710fd36df457bd13ac30f100bcb8e3c1e5"} Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.560418 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="barbican-kuttl-tests/openstack-galera-0" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.560842 4746 scope.go:117] "RemoveContainer" containerID="b9dfb9132a8c309c717cc27f788ee418a82a6964fbb89d7f66834817d1bfa2a8" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.577561 4746 scope.go:117] "RemoveContainer" containerID="2c6b2e1cef5c9b100a1d4e305294bbb713c9fa4c864e922c3ffacbc2d992507f" Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.578583 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.582927 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["barbican-kuttl-tests/openstack-galera-0"] Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.646814 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.647039 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" podUID="6fc0444c-72a4-4172-ab52-4f24f214486d" containerName="manager" containerID="cri-o://0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67" gracePeriod=10 Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.915212 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.915454 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-nxg77" podUID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" containerName="registry-server" containerID="cri-o://5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140" gracePeriod=30 Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.948025 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw"] Jan 03 03:35:04 crc kubenswrapper[4746]: I0103 03:35:04.950456 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/835551ba8f27f4fd61e1b05ebed5cb285496b645cbb6fd0ac403227c85llflw"] Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.088129 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.275513 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4twfs\" (UniqueName: \"kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs\") pod \"6fc0444c-72a4-4172-ab52-4f24f214486d\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.275935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert\") pod \"6fc0444c-72a4-4172-ab52-4f24f214486d\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.276007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert\") pod \"6fc0444c-72a4-4172-ab52-4f24f214486d\" (UID: \"6fc0444c-72a4-4172-ab52-4f24f214486d\") " Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.280701 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6fc0444c-72a4-4172-ab52-4f24f214486d" (UID: "6fc0444c-72a4-4172-ab52-4f24f214486d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.281387 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs" (OuterVolumeSpecName: "kube-api-access-4twfs") pod "6fc0444c-72a4-4172-ab52-4f24f214486d" (UID: "6fc0444c-72a4-4172-ab52-4f24f214486d"). InnerVolumeSpecName "kube-api-access-4twfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.282963 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "6fc0444c-72a4-4172-ab52-4f24f214486d" (UID: "6fc0444c-72a4-4172-ab52-4f24f214486d"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.289549 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.378053 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n94nk\" (UniqueName: \"kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk\") pod \"b0b84bd0-171e-4129-b0d4-42a68cd8075b\" (UID: \"b0b84bd0-171e-4129-b0d4-42a68cd8075b\") " Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.378275 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.378291 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fc0444c-72a4-4172-ab52-4f24f214486d-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.378302 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4twfs\" (UniqueName: \"kubernetes.io/projected/6fc0444c-72a4-4172-ab52-4f24f214486d-kube-api-access-4twfs\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.381084 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk" (OuterVolumeSpecName: "kube-api-access-n94nk") pod "b0b84bd0-171e-4129-b0d4-42a68cd8075b" (UID: "b0b84bd0-171e-4129-b0d4-42a68cd8075b"). InnerVolumeSpecName "kube-api-access-n94nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.479612 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n94nk\" (UniqueName: \"kubernetes.io/projected/b0b84bd0-171e-4129-b0d4-42a68cd8075b-kube-api-access-n94nk\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.567978 4746 generic.go:334] "Generic (PLEG): container finished" podID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" containerID="5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140" exitCode=0 Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.568025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nxg77" event={"ID":"b0b84bd0-171e-4129-b0d4-42a68cd8075b","Type":"ContainerDied","Data":"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140"} Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.568073 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-nxg77" event={"ID":"b0b84bd0-171e-4129-b0d4-42a68cd8075b","Type":"ContainerDied","Data":"04668fadb52f8fe7a612c0d09034747d8529cc1701c64b1b2e67fdd17b8dce34"} Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.568071 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-nxg77" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.568130 4746 scope.go:117] "RemoveContainer" containerID="5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.569648 4746 generic.go:334] "Generic (PLEG): container finished" podID="6fc0444c-72a4-4172-ab52-4f24f214486d" containerID="0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67" exitCode=0 Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.569723 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" event={"ID":"6fc0444c-72a4-4172-ab52-4f24f214486d","Type":"ContainerDied","Data":"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67"} Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.569788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" event={"ID":"6fc0444c-72a4-4172-ab52-4f24f214486d","Type":"ContainerDied","Data":"8d54a825a5edb6a1299700f9b940b2a5a11aecefec930bebda1c1f2102f59016"} Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.569740 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.594296 4746 scope.go:117] "RemoveContainer" containerID="5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140" Jan 03 03:35:05 crc kubenswrapper[4746]: E0103 03:35:05.594931 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140\": container with ID starting with 5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140 not found: ID does not exist" containerID="5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.594992 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140"} err="failed to get container status \"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140\": rpc error: code = NotFound desc = could not find container \"5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140\": container with ID starting with 5e767c2fa9e484101eb397b7898497bc11f6ab8a66f43a509f7f90c411c0a140 not found: ID does not exist" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.595030 4746 scope.go:117] "RemoveContainer" containerID="0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.603279 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.611723 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-nxg77"] Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.631359 4746 scope.go:117] "RemoveContainer" containerID="0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.631510 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:35:05 crc kubenswrapper[4746]: E0103 03:35:05.632231 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67\": container with ID starting with 0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67 not found: ID does not exist" containerID="0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.632300 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67"} err="failed to get container status \"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67\": rpc error: code = NotFound desc = could not find container \"0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67\": container with ID starting with 0443af6a83ad8f1cd77249484ac142335b81d91761f41b68979fd164343a6c67 not found: ID does not exist" Jan 03 03:35:05 crc kubenswrapper[4746]: I0103 03:35:05.639748 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-698bd85bf4-klp5b"] Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.471440 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc0444c-72a4-4172-ab52-4f24f214486d" path="/var/lib/kubelet/pods/6fc0444c-72a4-4172-ab52-4f24f214486d/volumes" Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.472116 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94395e3d-0bbb-4c00-b181-51289c280e93" path="/var/lib/kubelet/pods/94395e3d-0bbb-4c00-b181-51289c280e93/volumes" Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.472763 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" path="/var/lib/kubelet/pods/b0b84bd0-171e-4129-b0d4-42a68cd8075b/volumes" Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.473863 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" path="/var/lib/kubelet/pods/c201de48-9fda-488a-9ca1-d6cb8cc085c5/volumes" Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.626496 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.626707 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" podUID="61c330cc-8ee3-478a-8b0d-11170df356bf" containerName="operator" containerID="cri-o://c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c" gracePeriod=10 Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.891871 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.892320 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" podUID="2f26de35-b326-4263-9bb0-945d8ece35fb" containerName="registry-server" containerID="cri-o://04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58" gracePeriod=30 Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.918065 4746 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2"] Jan 03 03:35:06 crc kubenswrapper[4746]: I0103 03:35:06.924708 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590mxnr2"] Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.044747 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.203295 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46jtm\" (UniqueName: \"kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm\") pod \"61c330cc-8ee3-478a-8b0d-11170df356bf\" (UID: \"61c330cc-8ee3-478a-8b0d-11170df356bf\") " Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.208012 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm" (OuterVolumeSpecName: "kube-api-access-46jtm") pod "61c330cc-8ee3-478a-8b0d-11170df356bf" (UID: "61c330cc-8ee3-478a-8b0d-11170df356bf"). InnerVolumeSpecName "kube-api-access-46jtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.302206 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.304564 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8ftk\" (UniqueName: \"kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk\") pod \"2f26de35-b326-4263-9bb0-945d8ece35fb\" (UID: \"2f26de35-b326-4263-9bb0-945d8ece35fb\") " Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.304917 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46jtm\" (UniqueName: \"kubernetes.io/projected/61c330cc-8ee3-478a-8b0d-11170df356bf-kube-api-access-46jtm\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.308363 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk" (OuterVolumeSpecName: "kube-api-access-q8ftk") pod "2f26de35-b326-4263-9bb0-945d8ece35fb" (UID: "2f26de35-b326-4263-9bb0-945d8ece35fb"). InnerVolumeSpecName "kube-api-access-q8ftk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.406180 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8ftk\" (UniqueName: \"kubernetes.io/projected/2f26de35-b326-4263-9bb0-945d8ece35fb-kube-api-access-q8ftk\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.587392 4746 generic.go:334] "Generic (PLEG): container finished" podID="61c330cc-8ee3-478a-8b0d-11170df356bf" containerID="c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c" exitCode=0 Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.587450 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.587446 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" event={"ID":"61c330cc-8ee3-478a-8b0d-11170df356bf","Type":"ContainerDied","Data":"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c"} Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.587512 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp" event={"ID":"61c330cc-8ee3-478a-8b0d-11170df356bf","Type":"ContainerDied","Data":"15d9f6e970b6e796b6ff209cc877f625ab9a9f96dcaa717996d38af3466cdcca"} Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.587535 4746 scope.go:117] "RemoveContainer" containerID="c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.589036 4746 generic.go:334] "Generic (PLEG): container finished" podID="2f26de35-b326-4263-9bb0-945d8ece35fb" containerID="04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58" exitCode=0 Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.589075 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" event={"ID":"2f26de35-b326-4263-9bb0-945d8ece35fb","Type":"ContainerDied","Data":"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58"} Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.589093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" event={"ID":"2f26de35-b326-4263-9bb0-945d8ece35fb","Type":"ContainerDied","Data":"b96c9c86d896802dd6ac21af459c298f806863c5e9936ffe262f3155b8b8886a"} Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.589155 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-v7lbc" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.607317 4746 scope.go:117] "RemoveContainer" containerID="c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c" Jan 03 03:35:07 crc kubenswrapper[4746]: E0103 03:35:07.608981 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c\": container with ID starting with c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c not found: ID does not exist" containerID="c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.609014 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c"} err="failed to get container status \"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c\": rpc error: code = NotFound desc = could not find container \"c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c\": container with ID starting with c9e7838378fcd9ef16f38e75c09a981d8bffc6ee1dea92d30233ab89b6a2750c not found: ID does not exist" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.609032 4746 scope.go:117] "RemoveContainer" containerID="04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.619535 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.627084 4746 scope.go:117] "RemoveContainer" containerID="04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.627201 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-vc5fp"] Jan 03 03:35:07 crc kubenswrapper[4746]: E0103 03:35:07.627621 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58\": container with ID starting with 04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58 not found: ID does not exist" containerID="04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.627673 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58"} err="failed to get container status \"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58\": rpc error: code = NotFound desc = could not find container \"04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58\": container with ID starting with 04e27e3991f0a49031df39ea20f99def6eaedad9e98abd35e5ee480fd2ee5d58 not found: ID does not exist" Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.633285 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:35:07 crc kubenswrapper[4746]: I0103 03:35:07.638648 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-v7lbc"] Jan 03 03:35:08 crc kubenswrapper[4746]: I0103 03:35:08.480451 4746 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e89f99-dcd1-4eb2-9a47-412b5e25ffa6" path="/var/lib/kubelet/pods/03e89f99-dcd1-4eb2-9a47-412b5e25ffa6/volumes" Jan 03 03:35:08 crc kubenswrapper[4746]: I0103 03:35:08.481228 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f26de35-b326-4263-9bb0-945d8ece35fb" path="/var/lib/kubelet/pods/2f26de35-b326-4263-9bb0-945d8ece35fb/volumes" Jan 03 03:35:08 crc kubenswrapper[4746]: I0103 03:35:08.481619 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c330cc-8ee3-478a-8b0d-11170df356bf" path="/var/lib/kubelet/pods/61c330cc-8ee3-478a-8b0d-11170df356bf/volumes" Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.473437 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.473964 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" podUID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" containerName="manager" containerID="cri-o://5a1586d3c152a22244938286070f56ecfe1e54de4ded7b3df6c8682b79218676" gracePeriod=10 Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.635439 4746 generic.go:334] "Generic (PLEG): container finished" podID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" containerID="5a1586d3c152a22244938286070f56ecfe1e54de4ded7b3df6c8682b79218676" exitCode=0 Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.635724 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" event={"ID":"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22","Type":"ContainerDied","Data":"5a1586d3c152a22244938286070f56ecfe1e54de4ded7b3df6c8682b79218676"} Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.743157 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.743404 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-nl4pn" podUID="5ae7d481-e881-447f-a645-f1fbb8acf420" containerName="registry-server" containerID="cri-o://195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84" gracePeriod=30 Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.768688 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h"] Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.778505 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/1baad43ee583354ff53cbb260c7a91ea237208417e1944aa5070b0779egvs2h"] Jan 03 03:35:12 crc kubenswrapper[4746]: I0103 03:35:12.956707 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.080278 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert\") pod \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.080347 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftn2\" (UniqueName: \"kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2\") pod \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.080386 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert\") pod \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\" (UID: \"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22\") " Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.100694 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" (UID: "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.104867 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" (UID: "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.110858 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2" (OuterVolumeSpecName: "kube-api-access-fftn2") pod "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" (UID: "f0dc4ece-d74b-462f-9f80-9b5e3b7abe22"). InnerVolumeSpecName "kube-api-access-fftn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.183185 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.183466 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftn2\" (UniqueName: \"kubernetes.io/projected/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-kube-api-access-fftn2\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.183476 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.195060 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.385980 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dmmz\" (UniqueName: \"kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz\") pod \"5ae7d481-e881-447f-a645-f1fbb8acf420\" (UID: \"5ae7d481-e881-447f-a645-f1fbb8acf420\") " Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.390040 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz" (OuterVolumeSpecName: "kube-api-access-4dmmz") pod "5ae7d481-e881-447f-a645-f1fbb8acf420" (UID: "5ae7d481-e881-447f-a645-f1fbb8acf420"). InnerVolumeSpecName "kube-api-access-4dmmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.488034 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dmmz\" (UniqueName: \"kubernetes.io/projected/5ae7d481-e881-447f-a645-f1fbb8acf420-kube-api-access-4dmmz\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.643573 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" event={"ID":"f0dc4ece-d74b-462f-9f80-9b5e3b7abe22","Type":"ContainerDied","Data":"0042a9c6f5ad9cb88eda85458bff1fc6e7b31d21f8ee2667af7cd13b049804b1"} Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.643619 4746 scope.go:117] "RemoveContainer" containerID="5a1586d3c152a22244938286070f56ecfe1e54de4ded7b3df6c8682b79218676" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.643867 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.645101 4746 generic.go:334] "Generic (PLEG): container finished" podID="5ae7d481-e881-447f-a645-f1fbb8acf420" containerID="195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84" exitCode=0 Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.645151 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-nl4pn" event={"ID":"5ae7d481-e881-447f-a645-f1fbb8acf420","Type":"ContainerDied","Data":"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84"} Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.645180 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-nl4pn" event={"ID":"5ae7d481-e881-447f-a645-f1fbb8acf420","Type":"ContainerDied","Data":"c5e464366975673762b23ac94e5e8a775fe43bb9396bce3df6728b9886d1599c"} Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.645285 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-nl4pn" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.662560 4746 scope.go:117] "RemoveContainer" containerID="195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.690882 4746 scope.go:117] "RemoveContainer" containerID="195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84" Jan 03 03:35:13 crc kubenswrapper[4746]: E0103 03:35:13.691304 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84\": container with ID starting with 195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84 not found: ID does not exist" containerID="195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.691391 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84"} err="failed to get container status \"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84\": rpc error: code = NotFound desc = could not find container \"195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84\": container with ID starting with 195b4c966c01254ccf2466076ffc74a26f283c01effe39e0285abab49bd79d84 not found: ID does not exist" Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.691484 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.696639 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-74dd87d6d6-9nvt8"] Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.699822 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.702905 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-nl4pn"] Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.887360 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:35:13 crc kubenswrapper[4746]: I0103 03:35:13.887786 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" podUID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" containerName="manager" containerID="cri-o://be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf" gracePeriod=10 Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.164813 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.165276 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-k7f5z" podUID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" containerName="registry-server" containerID="cri-o://e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16" gracePeriod=30 Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.186543 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.192807 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/6751dad4d352445d073b762c07dc21dd11b3083693934f548acd5fedb5xfjp8"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.380929 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.397627 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert\") pod \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.397766 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rrb6\" (UniqueName: \"kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6\") pod \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.397802 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert\") pod \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\" (UID: \"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5\") " Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.402245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" (UID: "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.402814 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6" (OuterVolumeSpecName: "kube-api-access-4rrb6") pod "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" (UID: "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5"). InnerVolumeSpecName "kube-api-access-4rrb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.404021 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" (UID: "4a7583c4-ac1f-44c0-8c72-ac6a233d03e5"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.472402 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3e120a-2e84-4fb8-b93d-44eaebb61650" path="/var/lib/kubelet/pods/1c3e120a-2e84-4fb8-b93d-44eaebb61650/volumes" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.473333 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f665b3-fc2a-41b5-8d80-9601a4af8271" path="/var/lib/kubelet/pods/33f665b3-fc2a-41b5-8d80-9601a4af8271/volumes" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.474173 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae7d481-e881-447f-a645-f1fbb8acf420" path="/var/lib/kubelet/pods/5ae7d481-e881-447f-a645-f1fbb8acf420/volumes" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.475413 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" path="/var/lib/kubelet/pods/f0dc4ece-d74b-462f-9f80-9b5e3b7abe22/volumes" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.500902 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rrb6\" (UniqueName: \"kubernetes.io/projected/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-kube-api-access-4rrb6\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.500933 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.500942 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.507812 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.655175 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" containerID="e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16" exitCode=0 Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.655241 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-k7f5z" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.655268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-k7f5z" event={"ID":"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5","Type":"ContainerDied","Data":"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16"} Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.655311 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-k7f5z" event={"ID":"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5","Type":"ContainerDied","Data":"e0d59f86a8836cbb86e04f908bdd9171f01bf72de21dfb8f75b5d7dae1db99e9"} Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.655329 4746 scope.go:117] "RemoveContainer" containerID="e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.659974 4746 generic.go:334] "Generic (PLEG): container finished" podID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" containerID="be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf" exitCode=0 Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.660020 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" event={"ID":"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5","Type":"ContainerDied","Data":"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf"} Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.660048 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" event={"ID":"4a7583c4-ac1f-44c0-8c72-ac6a233d03e5","Type":"ContainerDied","Data":"03681c96517316481818d049b048dd0f7632e07b0a2a5c67dd067d8079ddfd92"} Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.660090 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.679810 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.684523 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-764d469f78-2w8ll"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.686134 4746 scope.go:117] "RemoveContainer" containerID="e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16" Jan 03 03:35:14 crc kubenswrapper[4746]: E0103 03:35:14.692286 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16\": container with ID starting with e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16 not found: ID does not exist" containerID="e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.692334 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16"} err="failed to get container status \"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16\": rpc error: code = NotFound desc = could not find container \"e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16\": container with ID starting with e42fad1eecef2e8eb063893079079a88acc231763244c7a39706c92ceab16b16 not found: ID does not exist" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.692363 4746 scope.go:117] "RemoveContainer" containerID="be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.703334 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s85rm\" (UniqueName: \"kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm\") pod \"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5\" (UID: \"0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5\") " Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.706606 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm" (OuterVolumeSpecName: "kube-api-access-s85rm") pod "0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" (UID: "0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5"). InnerVolumeSpecName "kube-api-access-s85rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.713077 4746 scope.go:117] "RemoveContainer" containerID="be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf" Jan 03 03:35:14 crc kubenswrapper[4746]: E0103 03:35:14.713692 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf\": container with ID starting with be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf not found: ID does not exist" containerID="be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.713753 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf"} err="failed to get container status \"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf\": rpc error: code = NotFound desc = could not find container \"be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf\": container with ID starting with be959269565b6b7d690a8773f0e099e6060afb99ea44532ea457b6f6fda769cf not found: ID does not exist" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.805203 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s85rm\" (UniqueName: \"kubernetes.io/projected/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5-kube-api-access-s85rm\") on node \"crc\" DevicePath \"\"" Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.981371 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:35:14 crc kubenswrapper[4746]: I0103 03:35:14.985231 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-k7f5z"] Jan 03 03:35:16 crc kubenswrapper[4746]: I0103 03:35:16.472043 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" path="/var/lib/kubelet/pods/0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5/volumes" Jan 03 03:35:16 crc kubenswrapper[4746]: I0103 03:35:16.473083 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" path="/var/lib/kubelet/pods/4a7583c4-ac1f-44c0-8c72-ac6a233d03e5/volumes" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.166312 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z8js4/must-gather-mrtdd"] Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167015 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167026 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167039 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167045 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167052 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0f024828-5253-4134-bd22-720212206aa3" containerName="keystone-api" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167058 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f024828-5253-4134-bd22-720212206aa3" containerName="keystone-api" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167066 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167071 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167079 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167084 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167091 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167097 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167105 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae7d481-e881-447f-a645-f1fbb8acf420" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167110 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae7d481-e881-447f-a645-f1fbb8acf420" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167126 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f26de35-b326-4263-9bb0-945d8ece35fb" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167141 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f26de35-b326-4263-9bb0-945d8ece35fb" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167149 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c330cc-8ee3-478a-8b0d-11170df356bf" containerName="operator" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167154 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c330cc-8ee3-478a-8b0d-11170df356bf" containerName="operator" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167165 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167170 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167180 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167186 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167195 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" 
containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167201 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167211 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" containerName="memcached" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167216 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" containerName="memcached" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167225 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167232 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167243 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc0444c-72a4-4172-ab52-4f24f214486d" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167249 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc0444c-72a4-4172-ab52-4f24f214486d" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167254 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167262 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167269 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167274 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167282 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167288 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167298 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="setup-container" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167304 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="setup-container" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167314 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167319 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167329 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" 
containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167335 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="mysql-bootstrap" Jan 03 03:35:28 crc kubenswrapper[4746]: E0103 03:35:28.167344 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="rabbitmq" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167349 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="rabbitmq" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167438 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f26de35-b326-4263-9bb0-945d8ece35fb" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167448 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c201de48-9fda-488a-9ca1-d6cb8cc085c5" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167458 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167465 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a7583c4-ac1f-44c0-8c72-ac6a233d03e5" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167482 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a58aeed-241f-4361-8570-043366a4a146" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167492 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c330cc-8ee3-478a-8b0d-11170df356bf" containerName="operator" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167500 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6d90d4-e4cf-40ec-876f-759bbcbdf7d5" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167509 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1fcbd3b-57ff-4989-b0ef-19fe9df21d87" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167518 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="829fb3d2-d144-42b6-9e2c-493ae34fdf6a" containerName="memcached" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167524 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc0444c-72a4-4172-ab52-4f24f214486d" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167533 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f024828-5253-4134-bd22-720212206aa3" containerName="keystone-api" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167540 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d592dd-baad-44d9-9fc0-3eab11cea0b4" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167547 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2615393c-ec92-4378-9eb7-4a5043a44bb6" containerName="rabbitmq" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167554 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="67518beb-2963-4548-9d9c-967483f41b00" containerName="mariadb-account-delete" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167562 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b0b84bd0-171e-4129-b0d4-42a68cd8075b" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167571 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b854ff9-97ba-4cd2-9136-db9e311d5e94" containerName="galera" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167578 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae7d481-e881-447f-a645-f1fbb8acf420" containerName="registry-server" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.167585 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0dc4ece-d74b-462f-9f80-9b5e3b7abe22" containerName="manager" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.168206 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.170332 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z8js4"/"kube-root-ca.crt" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.170972 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z8js4"/"openshift-service-ca.crt" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.192934 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z8js4/must-gather-mrtdd"] Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.267508 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.267693 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84jb\" (UniqueName: \"kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.369429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84jb\" (UniqueName: \"kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.369501 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.370006 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.387837 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b84jb\" (UniqueName: \"kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb\") pod \"must-gather-mrtdd\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.484970 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:35:28 crc kubenswrapper[4746]: I0103 03:35:28.929616 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z8js4/must-gather-mrtdd"] Jan 03 03:35:29 crc kubenswrapper[4746]: I0103 03:35:29.768874 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z8js4/must-gather-mrtdd" event={"ID":"bb679aef-7b8e-4aac-a5fc-e973fdf37777","Type":"ContainerStarted","Data":"7c0c0f99cb3d430661cf39247cb9dfa7b254b21c199004f06f2c8b56459b7837"} Jan 03 03:35:31 crc kubenswrapper[4746]: I0103 03:35:31.374674 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:35:31 crc kubenswrapper[4746]: I0103 03:35:31.375109 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:35:35 crc kubenswrapper[4746]: I0103 03:35:35.818184 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z8js4/must-gather-mrtdd" event={"ID":"bb679aef-7b8e-4aac-a5fc-e973fdf37777","Type":"ContainerStarted","Data":"27e5cc26afcd38ef3090385e29206f4779a3f4fc35212b49e17a97d48670446c"} Jan 03 03:35:36 crc kubenswrapper[4746]: I0103 03:35:36.827197 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z8js4/must-gather-mrtdd" event={"ID":"bb679aef-7b8e-4aac-a5fc-e973fdf37777","Type":"ContainerStarted","Data":"e32dce948d38ce72f8d647b371d7c16ea950e611dcbfe90afe40ecda9632ed99"} Jan 03 03:35:36 crc kubenswrapper[4746]: I0103 03:35:36.850783 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z8js4/must-gather-mrtdd" podStartSLOduration=2.345958296 podStartE2EDuration="8.850765818s" podCreationTimestamp="2026-01-03 03:35:28 +0000 UTC" firstStartedPulling="2026-01-03 03:35:28.94137039 +0000 UTC m=+1248.791260695" lastFinishedPulling="2026-01-03 03:35:35.446177902 +0000 UTC m=+1255.296068217" observedRunningTime="2026-01-03 03:35:36.844971406 +0000 UTC m=+1256.694861721" watchObservedRunningTime="2026-01-03 03:35:36.850765818 +0000 UTC m=+1256.700656123" Jan 03 03:35:41 crc kubenswrapper[4746]: I0103 03:35:41.237469 4746 scope.go:117] "RemoveContainer" containerID="2824f19a5478b2a9d7a6572dff9945f8aa19467a047aebcbace327c04ff61568" Jan 03 03:35:41 crc kubenswrapper[4746]: I0103 03:35:41.266021 4746 scope.go:117] "RemoveContainer" containerID="41729b43ba759c789f07d0d0fa28ab90f7879f0236a0ce5fa11925eeb9de8b78" Jan 03 03:35:41 crc kubenswrapper[4746]: I0103 03:35:41.286154 4746 scope.go:117] "RemoveContainer" containerID="07e405ee9a0a843825e01481469d9dd41597a3e2ea1195acce308fa806bd4870" Jan 03 03:36:01 crc kubenswrapper[4746]: I0103 
03:36:01.373563 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:36:01 crc kubenswrapper[4746]: I0103 03:36:01.374094 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:36:17 crc kubenswrapper[4746]: I0103 03:36:17.881527 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bj7mx_5bebc5d9-35a7-4154-873e-65d60f85f9b6/control-plane-machine-set-operator/0.log" Jan 03 03:36:18 crc kubenswrapper[4746]: I0103 03:36:18.057562 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d58zr_7a51c938-dfaf-4222-afb6-0cd79e445537/kube-rbac-proxy/0.log" Jan 03 03:36:18 crc kubenswrapper[4746]: I0103 03:36:18.099614 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d58zr_7a51c938-dfaf-4222-afb6-0cd79e445537/machine-api-operator/0.log" Jan 03 03:36:31 crc kubenswrapper[4746]: I0103 03:36:31.373312 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:36:31 crc kubenswrapper[4746]: I0103 03:36:31.373672 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:36:31 crc kubenswrapper[4746]: I0103 03:36:31.373719 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:36:31 crc kubenswrapper[4746]: I0103 03:36:31.374315 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:36:31 crc kubenswrapper[4746]: I0103 03:36:31.374362 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee" gracePeriod=600 Jan 03 03:36:32 crc kubenswrapper[4746]: I0103 03:36:32.165556 4746 generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee" exitCode=0 Jan 03 03:36:32 crc 
kubenswrapper[4746]: I0103 03:36:32.165608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee"} Jan 03 03:36:32 crc kubenswrapper[4746]: I0103 03:36:32.166388 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc"} Jan 03 03:36:32 crc kubenswrapper[4746]: I0103 03:36:32.166413 4746 scope.go:117] "RemoveContainer" containerID="eb6d369458d9ac55bbd1588092e61e42f348a71a898ff19ed28c8341fef5065e" Jan 03 03:36:33 crc kubenswrapper[4746]: I0103 03:36:33.782615 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-tz4v5_1ebc1074-93c2-408f-bad5-0392529562c7/controller/0.log" Jan 03 03:36:33 crc kubenswrapper[4746]: I0103 03:36:33.797305 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-tz4v5_1ebc1074-93c2-408f-bad5-0392529562c7/kube-rbac-proxy/0.log" Jan 03 03:36:33 crc kubenswrapper[4746]: I0103 03:36:33.954680 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.153622 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.166055 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.173781 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.174697 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.372214 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.373319 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.394830 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.395673 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.611195 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.624400 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.632691 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.662866 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/controller/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.788881 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/kube-rbac-proxy/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.794274 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/frr-metrics/0.log" Jan 03 03:36:34 crc kubenswrapper[4746]: I0103 03:36:34.840341 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/kube-rbac-proxy-frr/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.004912 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/reloader/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.097159 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-ztjlw_2883eb8b-d6db-4ede-bf40-cb8aee643105/frr-k8s-webhook-server/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.175381 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/frr/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.290362 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bbcffc974-mlzdd_6cb321dd-38a5-424e-a99d-f6594f2aa06e/manager/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.382506 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5657bbd6cc-tnqph_bbecca3d-7406-432b-995f-9a7ef95f6c01/webhook-server/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.470620 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gsdzz_460c2eb9-1e8c-499c-871b-a4bcf6fe99a1/kube-rbac-proxy/0.log" Jan 03 03:36:35 crc kubenswrapper[4746]: I0103 03:36:35.553374 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gsdzz_460c2eb9-1e8c-499c-871b-a4bcf6fe99a1/speaker/0.log" Jan 03 03:36:41 crc kubenswrapper[4746]: I0103 03:36:41.597687 4746 scope.go:117] "RemoveContainer" containerID="031dc780099bf16bd57112039bd8b1bb2cc4d6de23290420a71b60df7eb266c2" Jan 03 03:36:41 crc kubenswrapper[4746]: I0103 03:36:41.622820 4746 scope.go:117] "RemoveContainer" containerID="4b008c7dc5acbf9c1310c864311ff69de9c3c9da11286142b311d846a20d6889" Jan 03 03:36:41 crc kubenswrapper[4746]: I0103 03:36:41.639591 4746 scope.go:117] "RemoveContainer" containerID="93a19824f292d0a0ffcb6285653829aa8d58a2daf6e5989893ec7768301d2540" Jan 03 03:36:41 crc kubenswrapper[4746]: I0103 03:36:41.664321 4746 scope.go:117] "RemoveContainer" containerID="de071812d24df951dff8d6d5838b50f980d010a41a74dfbeb12915581ac88aa8" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.077642 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.276126 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.279986 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.299121 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.433737 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/extract/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.448909 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.457670 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.632047 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.795722 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.799185 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.808194 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.956470 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:36:58 crc kubenswrapper[4746]: I0103 03:36:58.996119 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.173102 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/registry-server/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.182567 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.374216 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.374901 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.411817 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.563005 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.607053 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.798010 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t5z9t_66739b82-1665-4781-b791-5b1fa1807d88/marketplace-operator/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.822144 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/registry-server/0.log" Jan 03 03:36:59 crc kubenswrapper[4746]: I0103 03:36:59.873102 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.047242 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.063396 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.069700 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.231335 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.283835 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/registry-server/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.287118 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.417493 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.597917 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.637455 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.638492 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.784486 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:37:00 crc kubenswrapper[4746]: I0103 03:37:00.789499 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:37:01 crc kubenswrapper[4746]: I0103 03:37:01.141902 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/registry-server/0.log" Jan 03 03:37:41 crc kubenswrapper[4746]: I0103 03:37:41.711470 4746 scope.go:117] "RemoveContainer" containerID="4dc8b346ad00c70fe7eba5dddfdaa8e986bc08df16b09d10f2357b5f4cd7287e" Jan 03 03:37:41 crc kubenswrapper[4746]: I0103 03:37:41.734042 4746 scope.go:117] "RemoveContainer" containerID="6705f27afb135cb7e409bc3fe98b5791a7a410b5e88eaddecc1365eb94c26b18" Jan 03 03:37:41 crc kubenswrapper[4746]: I0103 03:37:41.761451 4746 scope.go:117] "RemoveContainer" containerID="1a982ac140c728ca35d38ef8d58de515175fa0997736c21cc29e22ccd2f89f9b" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.533760 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.539244 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.553329 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.609480 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.609538 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.609577 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx8k4\" (UniqueName: \"kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.710966 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.711004 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.711028 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx8k4\" (UniqueName: \"kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.711644 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.711883 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.727975 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rx8k4\" (UniqueName: \"kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4\") pod \"redhat-marketplace-ndgzp\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:58 crc kubenswrapper[4746]: I0103 03:37:58.863697 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:37:59 crc kubenswrapper[4746]: I0103 03:37:59.112744 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:37:59 crc kubenswrapper[4746]: I0103 03:37:59.728249 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerID="b4aa116c9064e522371bb49d4ff560638854dc972ecd7a6e6492b3b8bb7b8957" exitCode=0 Jan 03 03:37:59 crc kubenswrapper[4746]: I0103 03:37:59.728293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerDied","Data":"b4aa116c9064e522371bb49d4ff560638854dc972ecd7a6e6492b3b8bb7b8957"} Jan 03 03:37:59 crc kubenswrapper[4746]: I0103 03:37:59.728320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerStarted","Data":"252ec68ec86fa2834ab2f0597bf8c0bdd5124d7c119562029fecf05a152d2a68"} Jan 03 03:37:59 crc kubenswrapper[4746]: I0103 03:37:59.730631 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 03 03:38:00 crc kubenswrapper[4746]: I0103 03:38:00.738415 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerStarted","Data":"b58e3344fd67b9b846446fd99d77765341d51498805c48949df885bf8dd98ced"} Jan 03 03:38:01 crc kubenswrapper[4746]: I0103 03:38:01.750927 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerID="b58e3344fd67b9b846446fd99d77765341d51498805c48949df885bf8dd98ced" exitCode=0 Jan 03 03:38:01 crc kubenswrapper[4746]: I0103 03:38:01.751089 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerDied","Data":"b58e3344fd67b9b846446fd99d77765341d51498805c48949df885bf8dd98ced"} Jan 03 03:38:02 crc kubenswrapper[4746]: I0103 03:38:02.765054 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerStarted","Data":"3f2fdf41ff13df72bc31685deaa28d9ba93679caab2c3b84de66cefc73a6a12d"} Jan 03 03:38:02 crc kubenswrapper[4746]: I0103 03:38:02.798323 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ndgzp" podStartSLOduration=2.268422605 podStartE2EDuration="4.798303771s" podCreationTimestamp="2026-01-03 03:37:58 +0000 UTC" firstStartedPulling="2026-01-03 03:37:59.730352809 +0000 UTC m=+1399.580243134" lastFinishedPulling="2026-01-03 03:38:02.260234005 +0000 UTC m=+1402.110124300" observedRunningTime="2026-01-03 03:38:02.791823192 +0000 UTC m=+1402.641713507" watchObservedRunningTime="2026-01-03 03:38:02.798303771 +0000 UTC 
m=+1402.648194086" Jan 03 03:38:08 crc kubenswrapper[4746]: I0103 03:38:08.864229 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:08 crc kubenswrapper[4746]: I0103 03:38:08.864914 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:08 crc kubenswrapper[4746]: I0103 03:38:08.912205 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:09 crc kubenswrapper[4746]: I0103 03:38:09.899645 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:10 crc kubenswrapper[4746]: I0103 03:38:10.354999 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:38:11 crc kubenswrapper[4746]: I0103 03:38:11.832883 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ndgzp" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="registry-server" containerID="cri-o://3f2fdf41ff13df72bc31685deaa28d9ba93679caab2c3b84de66cefc73a6a12d" gracePeriod=2 Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.153802 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.155183 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.171158 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.215183 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.215406 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.215529 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7zf\" (UniqueName: \"kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.317170 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7zf\" (UniqueName: \"kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: 
I0103 03:38:12.317226 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.317267 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.317710 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.317841 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.348416 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7zf\" (UniqueName: \"kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf\") pod \"certified-operators-55vk2\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.499923 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.775958 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.840959 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerID="3f2fdf41ff13df72bc31685deaa28d9ba93679caab2c3b84de66cefc73a6a12d" exitCode=0 Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.841451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerDied","Data":"3f2fdf41ff13df72bc31685deaa28d9ba93679caab2c3b84de66cefc73a6a12d"} Jan 03 03:38:12 crc kubenswrapper[4746]: I0103 03:38:12.842328 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerStarted","Data":"eaccd570af175ea274b2bb0fea665971facb0f19966d1c2d4eadeb2702f3e53e"} Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.295900 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.329166 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content\") pod \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.329212 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx8k4\" (UniqueName: \"kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4\") pod \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.329288 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities\") pod \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\" (UID: \"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7\") " Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.330895 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities" (OuterVolumeSpecName: "utilities") pod "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" (UID: "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.340099 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4" (OuterVolumeSpecName: "kube-api-access-rx8k4") pod "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" (UID: "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7"). InnerVolumeSpecName "kube-api-access-rx8k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.371419 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" (UID: "9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.430814 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.430877 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx8k4\" (UniqueName: \"kubernetes.io/projected/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-kube-api-access-rx8k4\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.430893 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.854756 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ndgzp" event={"ID":"9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7","Type":"ContainerDied","Data":"252ec68ec86fa2834ab2f0597bf8c0bdd5124d7c119562029fecf05a152d2a68"} Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.855226 4746 scope.go:117] "RemoveContainer" containerID="3f2fdf41ff13df72bc31685deaa28d9ba93679caab2c3b84de66cefc73a6a12d" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.854788 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ndgzp" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.858501 4746 generic.go:334] "Generic (PLEG): container finished" podID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerID="d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6" exitCode=0 Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.858554 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerDied","Data":"d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6"} Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.891635 4746 scope.go:117] "RemoveContainer" containerID="b58e3344fd67b9b846446fd99d77765341d51498805c48949df885bf8dd98ced" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.917470 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.920856 4746 scope.go:117] "RemoveContainer" containerID="b4aa116c9064e522371bb49d4ff560638854dc972ecd7a6e6492b3b8bb7b8957" Jan 03 03:38:13 crc kubenswrapper[4746]: I0103 03:38:13.925863 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ndgzp"] Jan 03 03:38:14 crc kubenswrapper[4746]: I0103 03:38:14.476833 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" path="/var/lib/kubelet/pods/9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7/volumes" Jan 03 03:38:14 crc kubenswrapper[4746]: I0103 03:38:14.868975 4746 generic.go:334] "Generic (PLEG): container finished" podID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerID="27e5cc26afcd38ef3090385e29206f4779a3f4fc35212b49e17a97d48670446c" exitCode=0 Jan 03 03:38:14 crc kubenswrapper[4746]: I0103 03:38:14.869110 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z8js4/must-gather-mrtdd" 
event={"ID":"bb679aef-7b8e-4aac-a5fc-e973fdf37777","Type":"ContainerDied","Data":"27e5cc26afcd38ef3090385e29206f4779a3f4fc35212b49e17a97d48670446c"} Jan 03 03:38:14 crc kubenswrapper[4746]: I0103 03:38:14.870451 4746 scope.go:117] "RemoveContainer" containerID="27e5cc26afcd38ef3090385e29206f4779a3f4fc35212b49e17a97d48670446c" Jan 03 03:38:14 crc kubenswrapper[4746]: I0103 03:38:14.873105 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerStarted","Data":"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f"} Jan 03 03:38:15 crc kubenswrapper[4746]: I0103 03:38:15.847739 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z8js4_must-gather-mrtdd_bb679aef-7b8e-4aac-a5fc-e973fdf37777/gather/0.log" Jan 03 03:38:15 crc kubenswrapper[4746]: I0103 03:38:15.883878 4746 generic.go:334] "Generic (PLEG): container finished" podID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerID="288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f" exitCode=0 Jan 03 03:38:15 crc kubenswrapper[4746]: I0103 03:38:15.883973 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerDied","Data":"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f"} Jan 03 03:38:16 crc kubenswrapper[4746]: I0103 03:38:16.903556 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerStarted","Data":"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1"} Jan 03 03:38:16 crc kubenswrapper[4746]: I0103 03:38:16.930319 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-55vk2" podStartSLOduration=2.439402383 podStartE2EDuration="4.930299574s" podCreationTimestamp="2026-01-03 03:38:12 +0000 UTC" firstStartedPulling="2026-01-03 03:38:13.86064158 +0000 UTC m=+1413.710531915" lastFinishedPulling="2026-01-03 03:38:16.351538801 +0000 UTC m=+1416.201429106" observedRunningTime="2026-01-03 03:38:16.928579762 +0000 UTC m=+1416.778470067" watchObservedRunningTime="2026-01-03 03:38:16.930299574 +0000 UTC m=+1416.780189889" Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.501377 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.502472 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.556834 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.809789 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z8js4/must-gather-mrtdd"] Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.810126 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z8js4/must-gather-mrtdd" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="copy" containerID="cri-o://e32dce948d38ce72f8d647b371d7c16ea950e611dcbfe90afe40ecda9632ed99" gracePeriod=2 Jan 03 03:38:22 crc kubenswrapper[4746]: 
I0103 03:38:22.837443 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z8js4/must-gather-mrtdd"] Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.946983 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z8js4_must-gather-mrtdd_bb679aef-7b8e-4aac-a5fc-e973fdf37777/copy/0.log" Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.947589 4746 generic.go:334] "Generic (PLEG): container finished" podID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerID="e32dce948d38ce72f8d647b371d7c16ea950e611dcbfe90afe40ecda9632ed99" exitCode=143 Jan 03 03:38:22 crc kubenswrapper[4746]: I0103 03:38:22.991297 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.041898 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.171787 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z8js4_must-gather-mrtdd_bb679aef-7b8e-4aac-a5fc-e973fdf37777/copy/0.log" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.172539 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.279906 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b84jb\" (UniqueName: \"kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb\") pod \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.280007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output\") pod \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\" (UID: \"bb679aef-7b8e-4aac-a5fc-e973fdf37777\") " Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.287491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb" (OuterVolumeSpecName: "kube-api-access-b84jb") pod "bb679aef-7b8e-4aac-a5fc-e973fdf37777" (UID: "bb679aef-7b8e-4aac-a5fc-e973fdf37777"). InnerVolumeSpecName "kube-api-access-b84jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.335072 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bb679aef-7b8e-4aac-a5fc-e973fdf37777" (UID: "bb679aef-7b8e-4aac-a5fc-e973fdf37777"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.381298 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b84jb\" (UniqueName: \"kubernetes.io/projected/bb679aef-7b8e-4aac-a5fc-e973fdf37777-kube-api-access-b84jb\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.381334 4746 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bb679aef-7b8e-4aac-a5fc-e973fdf37777-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.955439 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z8js4_must-gather-mrtdd_bb679aef-7b8e-4aac-a5fc-e973fdf37777/copy/0.log" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.956148 4746 scope.go:117] "RemoveContainer" containerID="e32dce948d38ce72f8d647b371d7c16ea950e611dcbfe90afe40ecda9632ed99" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.956158 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z8js4/must-gather-mrtdd" Jan 03 03:38:23 crc kubenswrapper[4746]: I0103 03:38:23.973018 4746 scope.go:117] "RemoveContainer" containerID="27e5cc26afcd38ef3090385e29206f4779a3f4fc35212b49e17a97d48670446c" Jan 03 03:38:24 crc kubenswrapper[4746]: I0103 03:38:24.472957 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" path="/var/lib/kubelet/pods/bb679aef-7b8e-4aac-a5fc-e973fdf37777/volumes" Jan 03 03:38:24 crc kubenswrapper[4746]: I0103 03:38:24.964367 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-55vk2" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="registry-server" containerID="cri-o://315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1" gracePeriod=2 Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.331258 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.414007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7zf\" (UniqueName: \"kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf\") pod \"33178598-f340-45dd-8b24-a0e4a39b6c1e\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.414164 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities\") pod \"33178598-f340-45dd-8b24-a0e4a39b6c1e\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.414232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content\") pod \"33178598-f340-45dd-8b24-a0e4a39b6c1e\" (UID: \"33178598-f340-45dd-8b24-a0e4a39b6c1e\") " Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.416059 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities" (OuterVolumeSpecName: "utilities") pod "33178598-f340-45dd-8b24-a0e4a39b6c1e" (UID: "33178598-f340-45dd-8b24-a0e4a39b6c1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.423280 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf" (OuterVolumeSpecName: "kube-api-access-9b7zf") pod "33178598-f340-45dd-8b24-a0e4a39b6c1e" (UID: "33178598-f340-45dd-8b24-a0e4a39b6c1e"). InnerVolumeSpecName "kube-api-access-9b7zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.515601 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7zf\" (UniqueName: \"kubernetes.io/projected/33178598-f340-45dd-8b24-a0e4a39b6c1e-kube-api-access-9b7zf\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.515685 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.672325 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33178598-f340-45dd-8b24-a0e4a39b6c1e" (UID: "33178598-f340-45dd-8b24-a0e4a39b6c1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.717825 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33178598-f340-45dd-8b24-a0e4a39b6c1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.972335 4746 generic.go:334] "Generic (PLEG): container finished" podID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerID="315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1" exitCode=0 Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.972399 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerDied","Data":"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1"} Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.972448 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-55vk2" event={"ID":"33178598-f340-45dd-8b24-a0e4a39b6c1e","Type":"ContainerDied","Data":"eaccd570af175ea274b2bb0fea665971facb0f19966d1c2d4eadeb2702f3e53e"} Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.972459 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-55vk2" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.972472 4746 scope.go:117] "RemoveContainer" containerID="315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1" Jan 03 03:38:25 crc kubenswrapper[4746]: I0103 03:38:25.992378 4746 scope.go:117] "RemoveContainer" containerID="288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.015902 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.021773 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-55vk2"] Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.023097 4746 scope.go:117] "RemoveContainer" containerID="d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.051362 4746 scope.go:117] "RemoveContainer" containerID="315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1" Jan 03 03:38:26 crc kubenswrapper[4746]: E0103 03:38:26.051828 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1\": container with ID starting with 315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1 not found: ID does not exist" containerID="315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.051889 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1"} err="failed to get container status \"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1\": rpc error: code = NotFound desc = could not find container \"315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1\": container with ID starting with 315d5d43b93612042c25843a1af93f4f35a10a4db074652e66dd0b88fed8fbf1 not found: ID does not exist" Jan 03 
03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.051918 4746 scope.go:117] "RemoveContainer" containerID="288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f" Jan 03 03:38:26 crc kubenswrapper[4746]: E0103 03:38:26.052208 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f\": container with ID starting with 288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f not found: ID does not exist" containerID="288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.052250 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f"} err="failed to get container status \"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f\": rpc error: code = NotFound desc = could not find container \"288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f\": container with ID starting with 288aa126e0263010edb5dd532e358a596df77de68ce061979cacb20b0d0b5a4f not found: ID does not exist" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.052280 4746 scope.go:117] "RemoveContainer" containerID="d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6" Jan 03 03:38:26 crc kubenswrapper[4746]: E0103 03:38:26.052550 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6\": container with ID starting with d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6 not found: ID does not exist" containerID="d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.052581 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6"} err="failed to get container status \"d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6\": rpc error: code = NotFound desc = could not find container \"d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6\": container with ID starting with d4390375188add31f295570300f26d9b67b81769e679bbb76557ca41035facb6 not found: ID does not exist" Jan 03 03:38:26 crc kubenswrapper[4746]: I0103 03:38:26.472074 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" path="/var/lib/kubelet/pods/33178598-f340-45dd-8b24-a0e4a39b6c1e/volumes" Jan 03 03:38:31 crc kubenswrapper[4746]: I0103 03:38:31.373640 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:38:31 crc kubenswrapper[4746]: I0103 03:38:31.374174 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:38:41 crc kubenswrapper[4746]: I0103 03:38:41.812431 4746 scope.go:117] 
"RemoveContainer" containerID="2eded92e414b1dc1538a1e152a7be2606b8569a429298d9492ccfe79da04f679" Jan 03 03:38:41 crc kubenswrapper[4746]: I0103 03:38:41.839649 4746 scope.go:117] "RemoveContainer" containerID="f59319b371d31369c6beaa6b395565265a4223cc773602385bee9bbef5ea8e65" Jan 03 03:38:41 crc kubenswrapper[4746]: I0103 03:38:41.860966 4746 scope.go:117] "RemoveContainer" containerID="16634cfc4c21204c83d4490d068f78151322b54b426b1c5b465e3f92840d9ee6" Jan 03 03:39:01 crc kubenswrapper[4746]: I0103 03:39:01.373942 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:39:01 crc kubenswrapper[4746]: I0103 03:39:01.375167 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:39:31 crc kubenswrapper[4746]: I0103 03:39:31.374055 4746 patch_prober.go:28] interesting pod/machine-config-daemon-8lt5d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 03 03:39:31 crc kubenswrapper[4746]: I0103 03:39:31.375608 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 03 03:39:31 crc kubenswrapper[4746]: I0103 03:39:31.375738 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" Jan 03 03:39:31 crc kubenswrapper[4746]: I0103 03:39:31.376383 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc"} pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 03 03:39:31 crc kubenswrapper[4746]: I0103 03:39:31.376511 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" containerName="machine-config-daemon" containerID="cri-o://34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" gracePeriod=600 Jan 03 03:39:31 crc kubenswrapper[4746]: E0103 03:39:31.516277 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:39:32 crc kubenswrapper[4746]: I0103 03:39:32.442622 4746 
generic.go:334] "Generic (PLEG): container finished" podID="00b3b853-9953-4039-964d-841a01708848" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" exitCode=0 Jan 03 03:39:32 crc kubenswrapper[4746]: I0103 03:39:32.442695 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerDied","Data":"34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc"} Jan 03 03:39:32 crc kubenswrapper[4746]: I0103 03:39:32.443010 4746 scope.go:117] "RemoveContainer" containerID="bc8caa044361bfc56c6c01ce89f41b5d201cf998f1072eb46a6767a1effaf4ee" Jan 03 03:39:32 crc kubenswrapper[4746]: I0103 03:39:32.443835 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:39:32 crc kubenswrapper[4746]: E0103 03:39:32.444187 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:39:41 crc kubenswrapper[4746]: I0103 03:39:41.952508 4746 scope.go:117] "RemoveContainer" containerID="b0d8afb23629647a4d2c1532dc37a6347eaca4e7cd09a6089edcb40aed085976" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.003451 4746 scope.go:117] "RemoveContainer" containerID="6f914e617ac67949fde0fdc4b754492a875edb036f78bb63cdda646cdd3543b2" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.029550 4746 scope.go:117] "RemoveContainer" containerID="9a849bbf850ecff68c4860a7c9e91355b95525fc93a31abea057f68450b8e567" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.047754 4746 scope.go:117] "RemoveContainer" containerID="4f0f540e64bdcddc60f6423ace6903c848e5e902a1251dec4a2d41d368b988a0" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.069817 4746 scope.go:117] "RemoveContainer" containerID="bc893040ff857e35968ef8ef2bd18b2eb36280b83f5035a213cbe8ee62e16e21" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.100930 4746 scope.go:117] "RemoveContainer" containerID="0dc46200456d02e0528d713ed47f068152d526f711f7bc606b2054e92fbf792d" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.136721 4746 scope.go:117] "RemoveContainer" containerID="099ca9c676227b3ce1a7cdbfe483f97f503619a4559deb068507232f34210985" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.154740 4746 scope.go:117] "RemoveContainer" containerID="7e7e96e28b54d657b50a8b11a3917ff0f1ecf4d9a9d20a95e46b69d17ce69a25" Jan 03 03:39:42 crc kubenswrapper[4746]: I0103 03:39:42.171178 4746 scope.go:117] "RemoveContainer" containerID="42b91fe797247e406f429c6dd3c833224c21730cb6b19203c56d7e1b179e75cf" Jan 03 03:39:44 crc kubenswrapper[4746]: I0103 03:39:44.465215 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:39:44 crc kubenswrapper[4746]: E0103 03:39:44.466830 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:39:56 crc kubenswrapper[4746]: I0103 03:39:56.464901 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:39:56 crc kubenswrapper[4746]: E0103 03:39:56.465584 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.724998 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.726449 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="extract-utilities" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.726644 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="extract-utilities" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.726852 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.727006 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.727163 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="extract-utilities" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.727317 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="extract-utilities" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.727475 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="gather" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.727586 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="gather" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.727734 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="extract-content" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.727909 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="extract-content" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.728051 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="copy" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.728213 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="copy" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.728400 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="extract-content" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 
03:39:57.728551 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="extract-content" Jan 03 03:39:57 crc kubenswrapper[4746]: E0103 03:39:57.728754 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.728930 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.729353 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="gather" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.729550 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="33178598-f340-45dd-8b24-a0e4a39b6c1e" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.729782 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a42ecd5-54e3-4d5a-be7b-1fefc345dbe7" containerName="registry-server" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.729972 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb679aef-7b8e-4aac-a5fc-e973fdf37777" containerName="copy" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.732030 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.737330 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.925126 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9q7p\" (UniqueName: \"kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.925234 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:57 crc kubenswrapper[4746]: I0103 03:39:57.925266 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.026959 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.027236 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.027349 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9q7p\" (UniqueName: \"kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.027650 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.027898 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.053186 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9q7p\" (UniqueName: \"kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p\") pod \"redhat-operators-vvzct\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.090440 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.525155 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:39:58 crc kubenswrapper[4746]: I0103 03:39:58.622374 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerStarted","Data":"0225d238e6958d0e951930b3bf44f9617cabbd6381143b471199afeefefa82f7"} Jan 03 03:39:59 crc kubenswrapper[4746]: I0103 03:39:59.632025 4746 generic.go:334] "Generic (PLEG): container finished" podID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerID="ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a" exitCode=0 Jan 03 03:39:59 crc kubenswrapper[4746]: I0103 03:39:59.632086 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerDied","Data":"ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a"} Jan 03 03:40:00 crc kubenswrapper[4746]: I0103 03:40:00.641091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerStarted","Data":"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5"} Jan 03 03:40:01 crc kubenswrapper[4746]: I0103 03:40:01.651625 4746 generic.go:334] "Generic (PLEG): container finished" podID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerID="ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5" exitCode=0 Jan 03 03:40:01 crc kubenswrapper[4746]: I0103 03:40:01.651741 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerDied","Data":"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5"} Jan 03 03:40:03 crc kubenswrapper[4746]: I0103 03:40:03.665154 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerStarted","Data":"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534"} Jan 03 03:40:03 crc kubenswrapper[4746]: I0103 03:40:03.683841 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvzct" podStartSLOduration=3.821811177 podStartE2EDuration="6.683821522s" podCreationTimestamp="2026-01-03 03:39:57 +0000 UTC" firstStartedPulling="2026-01-03 03:39:59.634018988 +0000 UTC m=+1519.483909323" lastFinishedPulling="2026-01-03 03:40:02.496029353 +0000 UTC m=+1522.345919668" observedRunningTime="2026-01-03 03:40:03.682196932 +0000 UTC m=+1523.532087237" watchObservedRunningTime="2026-01-03 03:40:03.683821522 +0000 UTC m=+1523.533711837" Jan 03 03:40:07 crc kubenswrapper[4746]: I0103 03:40:07.464502 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:40:07 crc kubenswrapper[4746]: E0103 03:40:07.464984 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:40:08 crc kubenswrapper[4746]: I0103 03:40:08.091581 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:08 crc kubenswrapper[4746]: I0103 03:40:08.091633 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:09 crc kubenswrapper[4746]: I0103 03:40:09.129820 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvzct" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="registry-server" probeResult="failure" output=< Jan 03 03:40:09 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Jan 03 03:40:09 crc kubenswrapper[4746]: > Jan 03 03:40:18 crc kubenswrapper[4746]: I0103 03:40:18.132498 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:18 crc kubenswrapper[4746]: I0103 03:40:18.186733 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:18 crc kubenswrapper[4746]: I0103 03:40:18.370926 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:40:19 crc kubenswrapper[4746]: I0103 03:40:19.777007 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvzct" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="registry-server" containerID="cri-o://0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534" gracePeriod=2 Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.159824 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.245685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities\") pod \"c31df03b-6f9f-4f13-9099-c37e29186d3c\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.245747 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9q7p\" (UniqueName: \"kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p\") pod \"c31df03b-6f9f-4f13-9099-c37e29186d3c\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.245819 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content\") pod \"c31df03b-6f9f-4f13-9099-c37e29186d3c\" (UID: \"c31df03b-6f9f-4f13-9099-c37e29186d3c\") " Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.246531 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities" (OuterVolumeSpecName: "utilities") pod "c31df03b-6f9f-4f13-9099-c37e29186d3c" (UID: "c31df03b-6f9f-4f13-9099-c37e29186d3c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.250996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p" (OuterVolumeSpecName: "kube-api-access-t9q7p") pod "c31df03b-6f9f-4f13-9099-c37e29186d3c" (UID: "c31df03b-6f9f-4f13-9099-c37e29186d3c"). InnerVolumeSpecName "kube-api-access-t9q7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.346405 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.346437 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9q7p\" (UniqueName: \"kubernetes.io/projected/c31df03b-6f9f-4f13-9099-c37e29186d3c-kube-api-access-t9q7p\") on node \"crc\" DevicePath \"\"" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.420969 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c31df03b-6f9f-4f13-9099-c37e29186d3c" (UID: "c31df03b-6f9f-4f13-9099-c37e29186d3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.448072 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31df03b-6f9f-4f13-9099-c37e29186d3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.789727 4746 generic.go:334] "Generic (PLEG): container finished" podID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerID="0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534" exitCode=0 Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.789796 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerDied","Data":"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534"} Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.789834 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvzct" event={"ID":"c31df03b-6f9f-4f13-9099-c37e29186d3c","Type":"ContainerDied","Data":"0225d238e6958d0e951930b3bf44f9617cabbd6381143b471199afeefefa82f7"} Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.789864 4746 scope.go:117] "RemoveContainer" containerID="0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.790058 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvzct" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.825402 4746 scope.go:117] "RemoveContainer" containerID="ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.825407 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.849236 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvzct"] Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.854097 4746 scope.go:117] "RemoveContainer" containerID="ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.877041 4746 scope.go:117] "RemoveContainer" containerID="0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534" Jan 03 03:40:20 crc kubenswrapper[4746]: E0103 03:40:20.877759 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534\": container with ID starting with 0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534 not found: ID does not exist" containerID="0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.877896 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534"} err="failed to get container status \"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534\": rpc error: code = NotFound desc = could not find container \"0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534\": container with ID starting with 0c1d68163d15d299df47b174761d8aecd4b842019cfbd3c86fbe2680ac750534 not found: ID does not exist" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.878785 4746 scope.go:117] "RemoveContainer" containerID="ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5" Jan 03 03:40:20 crc kubenswrapper[4746]: E0103 03:40:20.879388 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5\": container with ID starting with ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5 not found: ID does not exist" containerID="ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.879430 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5"} err="failed to get container status \"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5\": rpc error: code = NotFound desc = could not find container \"ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5\": container with ID starting with ceb8285cdd87defd6a71bbb8d43d6fa5cad5083385a997e5a0e005a6563787a5 not found: ID does not exist" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.879481 4746 scope.go:117] "RemoveContainer" containerID="ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a" Jan 03 03:40:20 crc kubenswrapper[4746]: E0103 03:40:20.880211 4746 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a\": container with ID starting with ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a not found: ID does not exist" containerID="ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a" Jan 03 03:40:20 crc kubenswrapper[4746]: I0103 03:40:20.880278 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a"} err="failed to get container status \"ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a\": rpc error: code = NotFound desc = could not find container \"ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a\": container with ID starting with ba6044b6da3cfcaa635da19a2da6fdeb98bb4a47b41c97103472e704bfd2d32a not found: ID does not exist" Jan 03 03:40:22 crc kubenswrapper[4746]: I0103 03:40:22.465220 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:40:22 crc kubenswrapper[4746]: E0103 03:40:22.465724 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:40:22 crc kubenswrapper[4746]: I0103 03:40:22.474295 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" path="/var/lib/kubelet/pods/c31df03b-6f9f-4f13-9099-c37e29186d3c/volumes" Jan 03 03:40:35 crc kubenswrapper[4746]: I0103 03:40:35.465289 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:40:35 crc kubenswrapper[4746]: E0103 03:40:35.466567 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.275435 4746 scope.go:117] "RemoveContainer" containerID="f60c54dc7f0e59c332cd801e222f1b81506c63f5ce617096f9e1447b568b53a7" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.293751 4746 scope.go:117] "RemoveContainer" containerID="32ddbc8e2acdb5d7575bd47f1cd6d7ac981696f28d5d7a57547d88b32c62eea6" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.319394 4746 scope.go:117] "RemoveContainer" containerID="568c2550e150795b1ad319b218a514908e2ac8ff243666b375318bfbb1388104" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.350206 4746 scope.go:117] "RemoveContainer" containerID="ffa86cf3a73df6b34f8d86eb97c27d43d6e75b6b7f3ea19a7af5a3b88afd6d9f" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.383081 4746 scope.go:117] "RemoveContainer" containerID="dfa75fa462d2e01b13eab237fc53682ed9b3833471de7fdadbe87072a0c787ba" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.405493 4746 scope.go:117] "RemoveContainer" 
containerID="af22d6cc1e27a251921b7332fbb505a31ad4d599b4de6e07e1bc6f49f48c84a3" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.427827 4746 scope.go:117] "RemoveContainer" containerID="5674d57dd1cb733894c446f9964dc3b4916dda1e2d31609ae504990a94f15e87" Jan 03 03:40:42 crc kubenswrapper[4746]: I0103 03:40:42.465317 4746 scope.go:117] "RemoveContainer" containerID="0fa0008a11adce4cabe3778c96418ba97bc03a27e34af51bfc9bd4bf2294c880" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.677926 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5xwfs/must-gather-57vtf"] Jan 03 03:40:46 crc kubenswrapper[4746]: E0103 03:40:46.678529 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="extract-utilities" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.678543 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="extract-utilities" Jan 03 03:40:46 crc kubenswrapper[4746]: E0103 03:40:46.678554 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="registry-server" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.678562 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="registry-server" Jan 03 03:40:46 crc kubenswrapper[4746]: E0103 03:40:46.678576 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="extract-content" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.678585 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="extract-content" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.678748 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31df03b-6f9f-4f13-9099-c37e29186d3c" containerName="registry-server" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.679467 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.682386 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5xwfs"/"kube-root-ca.crt" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.682746 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5xwfs"/"openshift-service-ca.crt" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.690242 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5xwfs/must-gather-57vtf"] Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.747881 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.747934 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzw6c\" (UniqueName: \"kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.848872 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.849183 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzw6c\" (UniqueName: \"kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.849333 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:46 crc kubenswrapper[4746]: I0103 03:40:46.866319 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzw6c\" (UniqueName: \"kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c\") pod \"must-gather-57vtf\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:47 crc kubenswrapper[4746]: I0103 03:40:47.005098 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:40:47 crc kubenswrapper[4746]: I0103 03:40:47.206125 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5xwfs/must-gather-57vtf"] Jan 03 03:40:47 crc kubenswrapper[4746]: I0103 03:40:47.465281 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:40:47 crc kubenswrapper[4746]: E0103 03:40:47.465491 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:40:48 crc kubenswrapper[4746]: I0103 03:40:48.118406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xwfs/must-gather-57vtf" event={"ID":"e1442c41-6053-4574-bd04-0c5530456a5e","Type":"ContainerStarted","Data":"386e522a682553fea3efc9a0bdd60ae417dda6708ac6a2f053b843b49394bdd3"} Jan 03 03:40:48 crc kubenswrapper[4746]: I0103 03:40:48.118477 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xwfs/must-gather-57vtf" event={"ID":"e1442c41-6053-4574-bd04-0c5530456a5e","Type":"ContainerStarted","Data":"444644cf4e4201e8095ccbb87f4886eadb994940716b86cb8b99864fd1d1c98c"} Jan 03 03:40:48 crc kubenswrapper[4746]: I0103 03:40:48.118492 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xwfs/must-gather-57vtf" event={"ID":"e1442c41-6053-4574-bd04-0c5530456a5e","Type":"ContainerStarted","Data":"6f52c09aec66c211eb356972eb54c057bd9a73dcfacd6885ac5f2ff7a3cb58f9"} Jan 03 03:40:48 crc kubenswrapper[4746]: I0103 03:40:48.139208 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5xwfs/must-gather-57vtf" podStartSLOduration=2.139190403 podStartE2EDuration="2.139190403s" podCreationTimestamp="2026-01-03 03:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-03 03:40:48.136041536 +0000 UTC m=+1567.985931841" watchObservedRunningTime="2026-01-03 03:40:48.139190403 +0000 UTC m=+1567.989080708" Jan 03 03:41:01 crc kubenswrapper[4746]: I0103 03:41:01.465167 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:41:01 crc kubenswrapper[4746]: E0103 03:41:01.465969 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:41:16 crc kubenswrapper[4746]: I0103 03:41:16.464552 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:41:16 crc kubenswrapper[4746]: E0103 03:41:16.465963 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.046214 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.048003 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.067425 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.213437 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2p7x\" (UniqueName: \"kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.213494 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.213525 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.315020 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2p7x\" (UniqueName: \"kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.315384 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.315476 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.316024 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content\") pod \"community-operators-gtqfh\" (UID: 
\"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.316185 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.343986 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2p7x\" (UniqueName: \"kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x\") pod \"community-operators-gtqfh\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.413007 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:23 crc kubenswrapper[4746]: I0103 03:41:23.916102 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:24 crc kubenswrapper[4746]: I0103 03:41:24.315145 4746 generic.go:334] "Generic (PLEG): container finished" podID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerID="035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2" exitCode=0 Jan 03 03:41:24 crc kubenswrapper[4746]: I0103 03:41:24.315196 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerDied","Data":"035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2"} Jan 03 03:41:24 crc kubenswrapper[4746]: I0103 03:41:24.315249 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerStarted","Data":"e191ffbb157d0f734ed9908f43823fb682bb5d6e929499cddd84b0e000c2f650"} Jan 03 03:41:25 crc kubenswrapper[4746]: I0103 03:41:25.321602 4746 generic.go:334] "Generic (PLEG): container finished" podID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerID="ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71" exitCode=0 Jan 03 03:41:25 crc kubenswrapper[4746]: I0103 03:41:25.321692 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerDied","Data":"ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71"} Jan 03 03:41:26 crc kubenswrapper[4746]: I0103 03:41:26.328603 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerStarted","Data":"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96"} Jan 03 03:41:26 crc kubenswrapper[4746]: I0103 03:41:26.353603 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtqfh" podStartSLOduration=1.827503012 podStartE2EDuration="3.35357652s" podCreationTimestamp="2026-01-03 03:41:23 +0000 UTC" firstStartedPulling="2026-01-03 03:41:24.317293806 +0000 UTC m=+1604.167184111" lastFinishedPulling="2026-01-03 03:41:25.843367314 +0000 UTC m=+1605.693257619" observedRunningTime="2026-01-03 
03:41:26.350123736 +0000 UTC m=+1606.200014041" watchObservedRunningTime="2026-01-03 03:41:26.35357652 +0000 UTC m=+1606.203466845" Jan 03 03:41:31 crc kubenswrapper[4746]: I0103 03:41:31.464897 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:41:31 crc kubenswrapper[4746]: E0103 03:41:31.465513 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:41:32 crc kubenswrapper[4746]: I0103 03:41:32.722811 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bj7mx_5bebc5d9-35a7-4154-873e-65d60f85f9b6/control-plane-machine-set-operator/0.log" Jan 03 03:41:32 crc kubenswrapper[4746]: I0103 03:41:32.877119 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d58zr_7a51c938-dfaf-4222-afb6-0cd79e445537/kube-rbac-proxy/0.log" Jan 03 03:41:32 crc kubenswrapper[4746]: I0103 03:41:32.926731 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-d58zr_7a51c938-dfaf-4222-afb6-0cd79e445537/machine-api-operator/0.log" Jan 03 03:41:33 crc kubenswrapper[4746]: I0103 03:41:33.414116 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:33 crc kubenswrapper[4746]: I0103 03:41:33.414162 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:33 crc kubenswrapper[4746]: I0103 03:41:33.478184 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:35 crc kubenswrapper[4746]: I0103 03:41:35.617945 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:35 crc kubenswrapper[4746]: I0103 03:41:35.663945 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:36 crc kubenswrapper[4746]: I0103 03:41:36.588632 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtqfh" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="registry-server" containerID="cri-o://f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96" gracePeriod=2 Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.084599 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.115797 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities\") pod \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.115901 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content\") pod \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.115925 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2p7x\" (UniqueName: \"kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x\") pod \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\" (UID: \"3d0b7981-8670-4f0e-b763-d9fae1ebf64f\") " Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.116556 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities" (OuterVolumeSpecName: "utilities") pod "3d0b7981-8670-4f0e-b763-d9fae1ebf64f" (UID: "3d0b7981-8670-4f0e-b763-d9fae1ebf64f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.161162 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d0b7981-8670-4f0e-b763-d9fae1ebf64f" (UID: "3d0b7981-8670-4f0e-b763-d9fae1ebf64f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.164853 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x" (OuterVolumeSpecName: "kube-api-access-v2p7x") pod "3d0b7981-8670-4f0e-b763-d9fae1ebf64f" (UID: "3d0b7981-8670-4f0e-b763-d9fae1ebf64f"). InnerVolumeSpecName "kube-api-access-v2p7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.217852 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-utilities\") on node \"crc\" DevicePath \"\"" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.217884 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.217895 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2p7x\" (UniqueName: \"kubernetes.io/projected/3d0b7981-8670-4f0e-b763-d9fae1ebf64f-kube-api-access-v2p7x\") on node \"crc\" DevicePath \"\"" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.603368 4746 generic.go:334] "Generic (PLEG): container finished" podID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerID="f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96" exitCode=0 Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.603599 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerDied","Data":"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96"} Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.603744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtqfh" event={"ID":"3d0b7981-8670-4f0e-b763-d9fae1ebf64f","Type":"ContainerDied","Data":"e191ffbb157d0f734ed9908f43823fb682bb5d6e929499cddd84b0e000c2f650"} Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.603772 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtqfh" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.603809 4746 scope.go:117] "RemoveContainer" containerID="f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.629223 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.636224 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtqfh"] Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.637709 4746 scope.go:117] "RemoveContainer" containerID="ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.652761 4746 scope.go:117] "RemoveContainer" containerID="035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.676187 4746 scope.go:117] "RemoveContainer" containerID="f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96" Jan 03 03:41:38 crc kubenswrapper[4746]: E0103 03:41:38.676859 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96\": container with ID starting with f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96 not found: ID does not exist" containerID="f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.676901 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96"} err="failed to get container status \"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96\": rpc error: code = NotFound desc = could not find container \"f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96\": container with ID starting with f35d82a6f03317bad46e5a03d19ab37dbed0f469662fa7ad496f279f5067ff96 not found: ID does not exist" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.676929 4746 scope.go:117] "RemoveContainer" containerID="ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71" Jan 03 03:41:38 crc kubenswrapper[4746]: E0103 03:41:38.677275 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71\": container with ID starting with ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71 not found: ID does not exist" containerID="ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.677308 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71"} err="failed to get container status \"ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71\": rpc error: code = NotFound desc = could not find container \"ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71\": container with ID starting with ba4d590efd0bc765d3a88de792266b56005bfb37d1350acf6e132e9e92878a71 not found: ID does not exist" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.677323 4746 scope.go:117] "RemoveContainer" 
containerID="035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2" Jan 03 03:41:38 crc kubenswrapper[4746]: E0103 03:41:38.677670 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2\": container with ID starting with 035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2 not found: ID does not exist" containerID="035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2" Jan 03 03:41:38 crc kubenswrapper[4746]: I0103 03:41:38.677695 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2"} err="failed to get container status \"035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2\": rpc error: code = NotFound desc = could not find container \"035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2\": container with ID starting with 035f39818a7baa53cf059b196b09855f54ad768881a25a945227154beaa7b2a2 not found: ID does not exist" Jan 03 03:41:40 crc kubenswrapper[4746]: I0103 03:41:40.471689 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" path="/var/lib/kubelet/pods/3d0b7981-8670-4f0e-b763-d9fae1ebf64f/volumes" Jan 03 03:41:42 crc kubenswrapper[4746]: I0103 03:41:42.465309 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:41:42 crc kubenswrapper[4746]: E0103 03:41:42.465820 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.374946 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-tz4v5_1ebc1074-93c2-408f-bad5-0392529562c7/kube-rbac-proxy/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.383238 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-tz4v5_1ebc1074-93c2-408f-bad5-0392529562c7/controller/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.549981 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.726565 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.730097 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.751210 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.790562 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.927755 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.963113 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.977528 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:41:49 crc kubenswrapper[4746]: I0103 03:41:49.980453 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.184185 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-reloader/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.187332 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-metrics/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.203950 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/cp-frr-files/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.230570 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/controller/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.350267 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/frr-metrics/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.401289 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/kube-rbac-proxy/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.442543 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/kube-rbac-proxy-frr/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.590533 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/reloader/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.665960 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-ztjlw_2883eb8b-d6db-4ede-bf40-cb8aee643105/frr-k8s-webhook-server/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.761455 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-clkjf_c58c9579-76cf-457e-a5da-ba83edbf0960/frr/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.801808 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bbcffc974-mlzdd_6cb321dd-38a5-424e-a99d-f6594f2aa06e/manager/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.952380 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5657bbd6cc-tnqph_bbecca3d-7406-432b-995f-9a7ef95f6c01/webhook-server/0.log" Jan 03 03:41:50 crc kubenswrapper[4746]: I0103 03:41:50.997075 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gsdzz_460c2eb9-1e8c-499c-871b-a4bcf6fe99a1/kube-rbac-proxy/0.log" Jan 03 03:41:51 crc kubenswrapper[4746]: I0103 03:41:51.167179 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gsdzz_460c2eb9-1e8c-499c-871b-a4bcf6fe99a1/speaker/0.log" Jan 03 03:41:54 crc kubenswrapper[4746]: I0103 03:41:54.465105 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:41:54 crc kubenswrapper[4746]: E0103 03:41:54.465578 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:42:08 crc kubenswrapper[4746]: I0103 03:42:08.465535 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:42:08 crc kubenswrapper[4746]: E0103 03:42:08.466876 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:42:16 crc kubenswrapper[4746]: I0103 03:42:16.633389 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:42:16 crc kubenswrapper[4746]: I0103 03:42:16.762776 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:42:16 crc kubenswrapper[4746]: I0103 03:42:16.796787 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:42:16 crc kubenswrapper[4746]: I0103 03:42:16.815160 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:42:16 crc kubenswrapper[4746]: I0103 03:42:16.984061 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/pull/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.002744 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/util/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.015254 4746 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4952dl_3eb2322f-9ccb-4d4f-8b59-13ef378eaf2c/extract/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.157567 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.335855 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.347017 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.353280 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.510170 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-utilities/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.541038 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/extract-content/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.717936 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.828329 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4rbl4_755ed109-a3f1-48c4-8bb2-0af0f2a543cf/registry-server/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.871915 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.908354 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:42:17 crc kubenswrapper[4746]: I0103 03:42:17.923876 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.062072 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-utilities/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.099252 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/extract-content/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.278811 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t5z9t_66739b82-1665-4781-b791-5b1fa1807d88/marketplace-operator/0.log" Jan 03 03:42:18 crc 
kubenswrapper[4746]: I0103 03:42:18.426212 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mzqgw_9a35bb44-d6aa-4a48-81b8-feacb81c8dbc/registry-server/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.432602 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.601475 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.610553 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.617100 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.791006 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-utilities/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.794803 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/extract-content/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.884419 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qtcgf_e8ba7568-2180-4086-85d4-c66dff5b3690/registry-server/0.log" Jan 03 03:42:18 crc kubenswrapper[4746]: I0103 03:42:18.974989 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.151706 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.156440 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.187553 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.333930 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-utilities/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.351262 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/extract-content/0.log" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.464264 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:42:19 crc kubenswrapper[4746]: E0103 03:42:19.464549 4746 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:42:19 crc kubenswrapper[4746]: I0103 03:42:19.676977 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w8694_88e33a79-0d63-4964-974b-374fa53c1113/registry-server/0.log" Jan 03 03:42:30 crc kubenswrapper[4746]: I0103 03:42:30.472479 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:42:30 crc kubenswrapper[4746]: E0103 03:42:30.473499 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:42:44 crc kubenswrapper[4746]: I0103 03:42:44.464974 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:42:44 crc kubenswrapper[4746]: E0103 03:42:44.465683 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:42:59 crc kubenswrapper[4746]: I0103 03:42:59.465486 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:42:59 crc kubenswrapper[4746]: E0103 03:42:59.466043 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:43:11 crc kubenswrapper[4746]: I0103 03:43:11.465246 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:43:11 crc kubenswrapper[4746]: E0103 03:43:11.466143 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:43:25 crc kubenswrapper[4746]: I0103 03:43:25.464295 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:43:25 crc kubenswrapper[4746]: E0103 03:43:25.464962 4746 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:43:33 crc kubenswrapper[4746]: I0103 03:43:33.574647 4746 generic.go:334] "Generic (PLEG): container finished" podID="e1442c41-6053-4574-bd04-0c5530456a5e" containerID="444644cf4e4201e8095ccbb87f4886eadb994940716b86cb8b99864fd1d1c98c" exitCode=0 Jan 03 03:43:33 crc kubenswrapper[4746]: I0103 03:43:33.574694 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5xwfs/must-gather-57vtf" event={"ID":"e1442c41-6053-4574-bd04-0c5530456a5e","Type":"ContainerDied","Data":"444644cf4e4201e8095ccbb87f4886eadb994940716b86cb8b99864fd1d1c98c"} Jan 03 03:43:33 crc kubenswrapper[4746]: I0103 03:43:33.575471 4746 scope.go:117] "RemoveContainer" containerID="444644cf4e4201e8095ccbb87f4886eadb994940716b86cb8b99864fd1d1c98c" Jan 03 03:43:33 crc kubenswrapper[4746]: I0103 03:43:33.648633 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xwfs_must-gather-57vtf_e1442c41-6053-4574-bd04-0c5530456a5e/gather/0.log" Jan 03 03:43:39 crc kubenswrapper[4746]: I0103 03:43:39.464282 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:43:39 crc kubenswrapper[4746]: E0103 03:43:39.464974 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.383744 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5xwfs/must-gather-57vtf"] Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.384703 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5xwfs/must-gather-57vtf" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="copy" containerID="cri-o://386e522a682553fea3efc9a0bdd60ae417dda6708ac6a2f053b843b49394bdd3" gracePeriod=2 Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.392361 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5xwfs/must-gather-57vtf"] Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.639475 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xwfs_must-gather-57vtf_e1442c41-6053-4574-bd04-0c5530456a5e/copy/0.log" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.643295 4746 generic.go:334] "Generic (PLEG): container finished" podID="e1442c41-6053-4574-bd04-0c5530456a5e" containerID="386e522a682553fea3efc9a0bdd60ae417dda6708ac6a2f053b843b49394bdd3" exitCode=143 Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.760100 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xwfs_must-gather-57vtf_e1442c41-6053-4574-bd04-0c5530456a5e/copy/0.log" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.760796 4746 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.833996 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzw6c\" (UniqueName: \"kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c\") pod \"e1442c41-6053-4574-bd04-0c5530456a5e\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.834048 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output\") pod \"e1442c41-6053-4574-bd04-0c5530456a5e\" (UID: \"e1442c41-6053-4574-bd04-0c5530456a5e\") " Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.840163 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c" (OuterVolumeSpecName: "kube-api-access-kzw6c") pod "e1442c41-6053-4574-bd04-0c5530456a5e" (UID: "e1442c41-6053-4574-bd04-0c5530456a5e"). InnerVolumeSpecName "kube-api-access-kzw6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.892017 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e1442c41-6053-4574-bd04-0c5530456a5e" (UID: "e1442c41-6053-4574-bd04-0c5530456a5e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.934983 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzw6c\" (UniqueName: \"kubernetes.io/projected/e1442c41-6053-4574-bd04-0c5530456a5e-kube-api-access-kzw6c\") on node \"crc\" DevicePath \"\"" Jan 03 03:43:43 crc kubenswrapper[4746]: I0103 03:43:43.935011 4746 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1442c41-6053-4574-bd04-0c5530456a5e-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 03 03:43:44 crc kubenswrapper[4746]: I0103 03:43:44.473787 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" path="/var/lib/kubelet/pods/e1442c41-6053-4574-bd04-0c5530456a5e/volumes" Jan 03 03:43:44 crc kubenswrapper[4746]: I0103 03:43:44.649823 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5xwfs_must-gather-57vtf_e1442c41-6053-4574-bd04-0c5530456a5e/copy/0.log" Jan 03 03:43:44 crc kubenswrapper[4746]: I0103 03:43:44.650279 4746 scope.go:117] "RemoveContainer" containerID="386e522a682553fea3efc9a0bdd60ae417dda6708ac6a2f053b843b49394bdd3" Jan 03 03:43:44 crc kubenswrapper[4746]: I0103 03:43:44.650334 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5xwfs/must-gather-57vtf" Jan 03 03:43:44 crc kubenswrapper[4746]: I0103 03:43:44.665021 4746 scope.go:117] "RemoveContainer" containerID="444644cf4e4201e8095ccbb87f4886eadb994940716b86cb8b99864fd1d1c98c" Jan 03 03:43:51 crc kubenswrapper[4746]: I0103 03:43:51.465078 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:43:51 crc kubenswrapper[4746]: E0103 03:43:51.467200 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:44:05 crc kubenswrapper[4746]: I0103 03:44:05.465232 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:44:05 crc kubenswrapper[4746]: E0103 03:44:05.466203 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:44:20 crc kubenswrapper[4746]: I0103 03:44:20.468364 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:44:20 crc kubenswrapper[4746]: E0103 03:44:20.469105 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8lt5d_openshift-machine-config-operator(00b3b853-9953-4039-964d-841a01708848)\"" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" podUID="00b3b853-9953-4039-964d-841a01708848" Jan 03 03:44:35 crc kubenswrapper[4746]: I0103 03:44:35.466100 4746 scope.go:117] "RemoveContainer" containerID="34b306ffe885fb6ba923d07faea84cdda571b538c5f66b06fd1d13d57f8bbbcc" Jan 03 03:44:35 crc kubenswrapper[4746]: I0103 03:44:35.962966 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8lt5d" event={"ID":"00b3b853-9953-4039-964d-841a01708848","Type":"ContainerStarted","Data":"316e55064a3b1238dca2a88ea2fc0d49594dfad02a334774cabb9d12d201a2e1"} Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.181855 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8"] Jan 03 03:45:00 crc kubenswrapper[4746]: E0103 03:45:00.183424 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="registry-server" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183446 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="registry-server" Jan 03 03:45:00 crc kubenswrapper[4746]: E0103 03:45:00.183477 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" 
containerName="extract-content" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183580 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="extract-content" Jan 03 03:45:00 crc kubenswrapper[4746]: E0103 03:45:00.183591 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="extract-utilities" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183601 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="extract-utilities" Jan 03 03:45:00 crc kubenswrapper[4746]: E0103 03:45:00.183615 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="gather" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183622 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="gather" Jan 03 03:45:00 crc kubenswrapper[4746]: E0103 03:45:00.183638 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="copy" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183646 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="copy" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183892 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0b7981-8670-4f0e-b763-d9fae1ebf64f" containerName="registry-server" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183912 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="gather" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.183935 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1442c41-6053-4574-bd04-0c5530456a5e" containerName="copy" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.185561 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.188290 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8"] Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.189079 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.189151 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.311761 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.311827 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.311878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k589p\" (UniqueName: \"kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.413233 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.413427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k589p\" (UniqueName: \"kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.413475 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.415301 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume\") pod 
\"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.425344 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.436883 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k589p\" (UniqueName: \"kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p\") pod \"collect-profiles-29456865-f27k8\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.509613 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:00 crc kubenswrapper[4746]: I0103 03:45:00.771855 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8"] Jan 03 03:45:01 crc kubenswrapper[4746]: I0103 03:45:01.143604 4746 generic.go:334] "Generic (PLEG): container finished" podID="98230655-6a3c-43e6-b2a4-4ab5451c8e28" containerID="63d0cfdbd1b74798da226900b279c4d75dc908684411b7ea732330e656ee417f" exitCode=0 Jan 03 03:45:01 crc kubenswrapper[4746]: I0103 03:45:01.143719 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" event={"ID":"98230655-6a3c-43e6-b2a4-4ab5451c8e28","Type":"ContainerDied","Data":"63d0cfdbd1b74798da226900b279c4d75dc908684411b7ea732330e656ee417f"} Jan 03 03:45:01 crc kubenswrapper[4746]: I0103 03:45:01.144241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" event={"ID":"98230655-6a3c-43e6-b2a4-4ab5451c8e28","Type":"ContainerStarted","Data":"d81f14194b2eba1887cf547e1c148d035d37ec83b200a35f8be1542dd4f04c5a"} Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.410377 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.452440 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k589p\" (UniqueName: \"kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p\") pod \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.452547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume\") pod \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.452715 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume\") pod \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\" (UID: \"98230655-6a3c-43e6-b2a4-4ab5451c8e28\") " Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.454963 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume" (OuterVolumeSpecName: "config-volume") pod "98230655-6a3c-43e6-b2a4-4ab5451c8e28" (UID: "98230655-6a3c-43e6-b2a4-4ab5451c8e28"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.458876 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p" (OuterVolumeSpecName: "kube-api-access-k589p") pod "98230655-6a3c-43e6-b2a4-4ab5451c8e28" (UID: "98230655-6a3c-43e6-b2a4-4ab5451c8e28"). InnerVolumeSpecName "kube-api-access-k589p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.459023 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98230655-6a3c-43e6-b2a4-4ab5451c8e28" (UID: "98230655-6a3c-43e6-b2a4-4ab5451c8e28"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.555510 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98230655-6a3c-43e6-b2a4-4ab5451c8e28-config-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.555590 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98230655-6a3c-43e6-b2a4-4ab5451c8e28-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 03 03:45:02 crc kubenswrapper[4746]: I0103 03:45:02.555624 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k589p\" (UniqueName: \"kubernetes.io/projected/98230655-6a3c-43e6-b2a4-4ab5451c8e28-kube-api-access-k589p\") on node \"crc\" DevicePath \"\"" Jan 03 03:45:03 crc kubenswrapper[4746]: I0103 03:45:03.160771 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8" event={"ID":"98230655-6a3c-43e6-b2a4-4ab5451c8e28","Type":"ContainerDied","Data":"d81f14194b2eba1887cf547e1c148d035d37ec83b200a35f8be1542dd4f04c5a"} Jan 03 03:45:03 crc kubenswrapper[4746]: I0103 03:45:03.161168 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d81f14194b2eba1887cf547e1c148d035d37ec83b200a35f8be1542dd4f04c5a" Jan 03 03:45:03 crc kubenswrapper[4746]: I0103 03:45:03.160895 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29456865-f27k8"